00:00:00.001 Started by upstream project "autotest-per-patch" build number 121259
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 21682
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.093 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.093 The recommended git tool is: git
00:00:00.094 using credential 00000000-0000-0000-0000-000000000002
00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.163 Fetching changes from the remote Git repository
00:00:00.164 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.222 Using shallow fetch with depth 1
00:00:00.222 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.222 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/4 # timeout=5
00:00:06.204 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.213 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.226 Checking out Revision 9a89b74058758bad3d12019ff5b47fa0c915a5eb (FETCH_HEAD)
00:00:06.226 > git config core.sparsecheckout # timeout=10
00:00:06.237 > git read-tree -mu HEAD # timeout=10
00:00:06.251 > git checkout -f 9a89b74058758bad3d12019ff5b47fa0c915a5eb # timeout=5
00:00:06.267 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs"
00:00:06.267 > git rev-list --no-walk 352f638cc5f3ff89bb1b1ec8306986452d7550bf # timeout=10
00:00:06.359 [Pipeline] Start of Pipeline
00:00:06.374 [Pipeline] library
00:00:06.376 Loading library shm_lib@master
00:00:06.376 Library shm_lib@master is cached. Copying from home.
00:00:06.393 [Pipeline] node
00:00:06.424 Running on GP12 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:06.425 [Pipeline] {
00:00:06.438 [Pipeline] catchError
00:00:06.439 [Pipeline] {
00:00:06.449 [Pipeline] wrap
00:00:06.459 [Pipeline] {
00:00:06.470 [Pipeline] stage
00:00:06.472 [Pipeline] { (Prologue)
00:00:06.704 [Pipeline] sh
00:00:07.672 + logger -p user.info -t JENKINS-CI
00:00:07.699 [Pipeline] echo
00:00:07.701 Node: GP12
00:00:07.709 [Pipeline] sh
00:00:08.059 [Pipeline] setCustomBuildProperty
00:00:08.071 [Pipeline] echo
00:00:08.073 Cleanup processes
00:00:08.078 [Pipeline] sh
00:00:08.374 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.374 9366 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.392 [Pipeline] sh
00:00:08.684 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.684 ++ grep -v 'sudo pgrep'
00:00:08.684 ++ awk '{print $1}'
00:00:08.684 + sudo kill -9
00:00:08.684 + true
00:00:08.701 [Pipeline] cleanWs
00:00:08.711 [WS-CLEANUP] Deleting project workspace...
00:00:08.711 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.725 [WS-CLEANUP] done
00:00:08.729 [Pipeline] setCustomBuildProperty
00:00:08.742 [Pipeline] sh
00:00:09.033 + sudo git config --global --replace-all safe.directory '*'
00:00:09.109 [Pipeline] nodesByLabel
00:00:09.110 Found a total of 1 nodes with the 'sorcerer' label
00:00:09.121 [Pipeline] httpRequest
00:00:09.405 HttpMethod: GET
00:00:09.405 URL: http://10.211.164.96/packages/jbp_9a89b74058758bad3d12019ff5b47fa0c915a5eb.tar.gz
00:00:10.295 Sending request to url: http://10.211.164.96/packages/jbp_9a89b74058758bad3d12019ff5b47fa0c915a5eb.tar.gz
00:00:10.630 Response Code: HTTP/1.1 200 OK
00:00:10.721 Success: Status code 200 is in the accepted range: 200,404
00:00:10.722 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_9a89b74058758bad3d12019ff5b47fa0c915a5eb.tar.gz
00:00:15.290 [Pipeline] sh
00:00:15.589 + tar --no-same-owner -xf jbp_9a89b74058758bad3d12019ff5b47fa0c915a5eb.tar.gz
00:00:15.608 [Pipeline] httpRequest
00:00:15.615 HttpMethod: GET
00:00:15.616 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz
00:00:15.620 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz
00:00:15.645 Response Code: HTTP/1.1 200 OK
00:00:15.646 Success: Status code 200 is in the accepted range: 200,404
00:00:15.646 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz
00:01:07.198 [Pipeline] sh
00:01:07.495 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz
00:01:10.052 [Pipeline] sh
00:01:10.347 + git -C spdk log --oneline -n5
00:01:10.347 8571999d8 test/scheduler: Stop moving all processes between cgroups
00:01:10.347 06472fb6d lib/idxd: fix batch size in kernel IDXD
00:01:10.347 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD
00:01:10.347 3dbaa93c1 nvmf: pass command dword 12 and 13 for write
00:01:10.347 19327fc3a bdev/nvme: use dtype/dspec for write commands
00:01:10.362 [Pipeline] }
00:01:10.379 [Pipeline] // stage
00:01:10.388 [Pipeline] stage
00:01:10.391 [Pipeline] { (Prepare)
00:01:10.415 [Pipeline] writeFile
00:01:10.437 [Pipeline] sh
00:01:10.730 + logger -p user.info -t JENKINS-CI
00:01:10.744 [Pipeline] sh
00:01:11.034 + logger -p user.info -t JENKINS-CI
00:01:11.048 [Pipeline] sh
00:01:11.338 + cat autorun-spdk.conf
00:01:11.338 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.338 SPDK_TEST_NVMF=1
00:01:11.338 SPDK_TEST_NVME_CLI=1
00:01:11.338 SPDK_TEST_NVMF_NICS=mlx5
00:01:11.338 SPDK_RUN_ASAN=1
00:01:11.338 SPDK_RUN_UBSAN=1
00:01:11.338 NET_TYPE=phy
00:01:11.346 RUN_NIGHTLY=0
00:01:11.350 [Pipeline] readFile
00:01:11.416 [Pipeline] withEnv
00:01:11.422 [Pipeline] {
00:01:11.435 [Pipeline] sh
00:01:11.727 + set -ex
00:01:11.727 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:11.727 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:11.727 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.727 ++ SPDK_TEST_NVMF=1
00:01:11.727 ++ SPDK_TEST_NVME_CLI=1
00:01:11.727 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:11.727 ++ SPDK_RUN_ASAN=1
00:01:11.727 ++ SPDK_RUN_UBSAN=1
00:01:11.727 ++ NET_TYPE=phy
00:01:11.727 ++ RUN_NIGHTLY=0
00:01:11.727 + case $SPDK_TEST_NVMF_NICS in
00:01:11.727 + DRIVERS=mlx5_ib
00:01:11.727 + [[ -n mlx5_ib ]]
00:01:11.727 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.727 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:15.051 rmmod: ERROR: Module irdma is not currently loaded
00:01:15.051 rmmod: ERROR: Module i40iw is not currently loaded
00:01:15.051 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:15.051 + true
00:01:15.051 + for D in $DRIVERS
00:01:15.051 + sudo modprobe mlx5_ib
00:01:15.311 + exit 0
00:01:15.321 [Pipeline] }
00:01:15.341 [Pipeline] // withEnv
00:01:15.346 [Pipeline] }
00:01:15.362 [Pipeline] // stage
00:01:15.372 [Pipeline] catchError
00:01:15.373 [Pipeline] {
00:01:15.390 [Pipeline] timeout
00:01:15.391 Timeout set to expire in 40 min
00:01:15.392 [Pipeline] {
00:01:15.408 [Pipeline] stage
00:01:15.410 [Pipeline] { (Tests)
00:01:15.423 [Pipeline] sh
00:01:15.710 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:15.710 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:15.710 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:15.710 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:15.710 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:15.710 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:15.710 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:15.710 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:15.710 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:15.710 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:15.710 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:15.710 + source /etc/os-release
00:01:15.710 ++ NAME='Fedora Linux'
00:01:15.710 ++ VERSION='38 (Cloud Edition)'
00:01:15.710 ++ ID=fedora
00:01:15.710 ++ VERSION_ID=38
00:01:15.710 ++ VERSION_CODENAME=
00:01:15.710 ++ PLATFORM_ID=platform:f38
00:01:15.710 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:15.710 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:15.710 ++ LOGO=fedora-logo-icon
00:01:15.710 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:15.710 ++ HOME_URL=https://fedoraproject.org/
00:01:15.710 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:15.710 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:15.710 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:15.710 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:15.710 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:15.710 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:15.710 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:15.710 ++ SUPPORT_END=2024-05-14
00:01:15.710 ++ VARIANT='Cloud Edition'
00:01:15.710 ++ VARIANT_ID=cloud
00:01:15.710 + uname -a
00:01:15.710 Linux spdk-gp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:15.710 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:16.646 Hugepages
00:01:16.646 node hugesize free / total
00:01:16.646 node0 1048576kB 0 / 0
00:01:16.646 node0 2048kB 0 / 0
00:01:16.646 node1 1048576kB 0 / 0
00:01:16.908 node1 2048kB 0 / 0
00:01:16.908
00:01:16.908 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:16.908 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:16.908 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:16.908 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:16.908 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:16.908 + rm -f /tmp/spdk-ld-path
00:01:16.908 + source autorun-spdk.conf
00:01:16.908 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.908 ++ SPDK_TEST_NVMF=1
00:01:16.908 ++ SPDK_TEST_NVME_CLI=1
00:01:16.908 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:16.908 ++ SPDK_RUN_ASAN=1
00:01:16.908 ++ SPDK_RUN_UBSAN=1
00:01:16.908 ++ NET_TYPE=phy
00:01:16.908 ++ RUN_NIGHTLY=0
00:01:16.908 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:16.908 + [[ -n '' ]]
00:01:16.908 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:16.908 + for M in /var/spdk/build-*-manifest.txt
00:01:16.908 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:16.908 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:16.908 + for M in /var/spdk/build-*-manifest.txt
00:01:16.908 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:16.908 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:16.908 ++ uname
00:01:16.908 + [[ Linux == \L\i\n\u\x ]]
00:01:16.908 + sudo dmesg -T
00:01:16.908 + sudo dmesg --clear
00:01:16.908 + dmesg_pid=10026
00:01:16.908 + [[ Fedora Linux == FreeBSD ]]
00:01:16.908 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.908 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.908 + sudo dmesg -Tw
00:01:16.908 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:16.908 + [[ -x /usr/src/fio-static/fio ]]
00:01:16.908 + export FIO_BIN=/usr/src/fio-static/fio
00:01:16.908 + FIO_BIN=/usr/src/fio-static/fio
00:01:16.909 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:16.909 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:16.909 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:16.909 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.909 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.909 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:16.909 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.909 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.909 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:16.909 Test configuration:
00:01:16.909 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.909 SPDK_TEST_NVMF=1
00:01:16.909 SPDK_TEST_NVME_CLI=1
00:01:16.909 SPDK_TEST_NVMF_NICS=mlx5
00:01:16.909 SPDK_RUN_ASAN=1
00:01:16.909 SPDK_RUN_UBSAN=1
00:01:16.909 NET_TYPE=phy
00:01:16.909 RUN_NIGHTLY=0
14:38:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:16.909 14:38:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:16.909 14:38:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:16.909 14:38:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:16.909 14:38:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.909 14:38:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.909 14:38:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.909 14:38:16 -- paths/export.sh@5 -- $ export PATH
00:01:16.909 14:38:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:16.909 14:38:16 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:16.909 14:38:16 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:16.909 14:38:16 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714135096.XXXXXX
00:01:16.909 14:38:16 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714135096.89p3is
00:01:16.909 14:38:16 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:16.909 14:38:16 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:16.909 14:38:16 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:01:17.170 14:38:16 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:17.170 14:38:16 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:17.170 14:38:16 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:17.170 14:38:16 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:01:17.170 14:38:16 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.170 14:38:17 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:17.170 14:38:17 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:01:17.170 14:38:17 -- pm/common@17 -- $ local monitor
00:01:17.170 14:38:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.170 14:38:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=10060
00:01:17.170 14:38:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.170 14:38:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=10062
00:01:17.170 14:38:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.170 14:38:17 -- pm/common@21 -- $ date +%s
00:01:17.170 14:38:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=10064
00:01:17.170 14:38:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:17.170 14:38:17 -- pm/common@21 -- $ date +%s
00:01:17.170 14:38:17 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=10067
00:01:17.170 14:38:17 -- pm/common@26 -- $ sleep 1
00:01:17.170 14:38:17 -- pm/common@21 -- $ date +%s
00:01:17.170 14:38:17 -- pm/common@21 -- $ date +%s
00:01:17.170 14:38:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135097
00:01:17.170 14:38:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135097
00:01:17.170 14:38:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135097
00:01:17.170 14:38:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714135097
00:01:17.170 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135097_collect-bmc-pm.bmc.pm.log
00:01:17.170 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135097_collect-vmstat.pm.log
00:01:17.170 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135097_collect-cpu-load.pm.log
00:01:17.170 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714135097_collect-cpu-temp.pm.log
00:01:18.115 14:38:18 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:01:18.115 14:38:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:18.115 14:38:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:18.115 14:38:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:18.115 14:38:18 -- spdk/autobuild.sh@16 -- $ date -u
00:01:18.115 Fri Apr 26 12:38:18 PM UTC 2024
00:01:18.115 14:38:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:18.115 v24.05-pre-449-g8571999d8
00:01:18.115 14:38:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:18.115 14:38:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:18.115 14:38:18 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:18.115 14:38:18 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:18.115 14:38:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.115 ************************************
00:01:18.115 START TEST asan
00:01:18.115 ************************************
00:01:18.115 14:38:18 -- common/autotest_common.sh@1111 -- $ echo 'using asan'
00:01:18.115 using asan
00:01:18.115
00:01:18.115 real 0m0.000s
00:01:18.115 user 0m0.000s
00:01:18.115 sys 0m0.000s
00:01:18.115 14:38:18 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:18.115 14:38:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.115 ************************************
00:01:18.115 END TEST asan
00:01:18.115 ************************************
00:01:18.115 14:38:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:18.115 14:38:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:18.115 14:38:18 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:18.115 14:38:18 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:18.115 14:38:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.377 ************************************
00:01:18.377 START TEST ubsan
00:01:18.377 ************************************
00:01:18.377 14:38:18 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:01:18.377 using ubsan
00:01:18.377
00:01:18.377 real 0m0.000s
00:01:18.377 user 0m0.000s
00:01:18.377 sys 0m0.000s
00:01:18.377 14:38:18 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:18.377 14:38:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.377 ************************************
00:01:18.377 END TEST ubsan
00:01:18.377 ************************************
00:01:18.377 14:38:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:18.377 14:38:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:18.377 14:38:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:18.377 14:38:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:18.377 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:18.377 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:19.764 Using 'verbs' RDMA provider
00:01:32.934 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:42.940 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:42.940 Creating mk/config.mk...done.
00:01:42.940 Creating mk/cc.flags.mk...done.
00:01:42.940 Type 'make' to build.
00:01:42.940 14:38:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:42.940 14:38:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:42.940 14:38:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:42.940 14:38:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.940 ************************************
00:01:42.940 START TEST make
00:01:42.940 ************************************
00:01:42.940 14:38:42 -- common/autotest_common.sh@1111 -- $ make -j48
00:01:43.200 make[1]: Nothing to be done for 'all'.
00:01:53.225 The Meson build system
00:01:53.225 Version: 1.3.1
00:01:53.225 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:53.225 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:53.225 Build type: native build
00:01:53.225 Program cat found: YES (/usr/bin/cat)
00:01:53.225 Project name: DPDK
00:01:53.225 Project version: 23.11.0
00:01:53.225 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:53.225 C linker for the host machine: cc ld.bfd 2.39-16
00:01:53.225 Host machine cpu family: x86_64
00:01:53.225 Host machine cpu: x86_64
00:01:53.225 Message: ## Building in Developer Mode ##
00:01:53.225 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.225 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.225 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.225 Program python3 found: YES (/usr/bin/python3)
00:01:53.225 Program cat found: YES (/usr/bin/cat)
00:01:53.225 Compiler for C supports arguments -march=native: YES
00:01:53.225 Checking for size of "void *" : 8
00:01:53.225 Checking for size of "void *" : 8 (cached)
00:01:53.225 Library m found: YES
00:01:53.225 Library numa found: YES
00:01:53.225 Has header "numaif.h" : YES
00:01:53.225 Library fdt found: NO
00:01:53.225 Library execinfo found: NO
00:01:53.225 Has header "execinfo.h" : YES
00:01:53.225 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:53.225 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.225 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.225 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.225 Run-time dependency openssl found: YES 3.0.9
00:01:53.225 Run-time dependency libpcap found: YES 1.10.4
00:01:53.225 Has header "pcap.h" with dependency libpcap: YES
00:01:53.225 Compiler for C supports arguments -Wcast-qual: YES
00:01:53.225 Compiler for C supports arguments -Wdeprecated: YES
00:01:53.225 Compiler for C supports arguments -Wformat: YES
00:01:53.225 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:53.225 Compiler for C supports arguments -Wformat-security: NO
00:01:53.225 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.225 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:53.225 Compiler for C supports arguments -Wnested-externs: YES
00:01:53.225 Compiler for C supports arguments -Wold-style-definition: YES
00:01:53.225 Compiler for C supports arguments -Wpointer-arith: YES
00:01:53.225 Compiler for C supports arguments -Wsign-compare: YES
00:01:53.225 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:53.225 Compiler for C supports arguments -Wundef: YES
00:01:53.225 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.225 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:53.225 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:53.225 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.225 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:53.225 Program objdump found: YES (/usr/bin/objdump)
00:01:53.225 Compiler for C supports arguments -mavx512f: YES
00:01:53.225 Checking if "AVX512 checking" compiles: YES
00:01:53.225 Fetching value of define "__SSE4_2__" : 1
00:01:53.225 Fetching value of define "__AES__" : 1
00:01:53.225 Fetching value of define "__AVX__" : 1
00:01:53.225 Fetching value of define "__AVX2__" : (undefined)
00:01:53.225 Fetching value of define "__AVX512BW__" : (undefined)
00:01:53.225 Fetching value of define "__AVX512CD__" : (undefined)
00:01:53.225 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:53.225 Fetching value of define "__AVX512F__" : (undefined)
00:01:53.225 Fetching value of define "__AVX512VL__" : (undefined)
00:01:53.225 Fetching value of define "__PCLMUL__" : 1
00:01:53.225 Fetching value of define "__RDRND__" : 1
00:01:53.225 Fetching value of define "__RDSEED__" : (undefined)
00:01:53.225 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:53.225 Fetching value of define "__znver1__" : (undefined)
00:01:53.225 Fetching value of define "__znver2__" : (undefined)
00:01:53.225 Fetching value of define "__znver3__" : (undefined)
00:01:53.225 Fetching value of define "__znver4__" : (undefined)
00:01:53.225 Library asan found: YES
00:01:53.225 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:53.225 Message: lib/log: Defining dependency "log"
00:01:53.225 Message: lib/kvargs: Defining dependency "kvargs"
00:01:53.225 Message: lib/telemetry: Defining dependency "telemetry"
00:01:53.226 Library rt found: YES
00:01:53.226 Checking for function "getentropy" : NO
00:01:53.226 Message: lib/eal: Defining dependency "eal"
00:01:53.226 Message: lib/ring: Defining dependency "ring"
00:01:53.226 Message: lib/rcu: Defining dependency "rcu"
00:01:53.226 Message: lib/mempool: Defining dependency "mempool"
00:01:53.226 Message: lib/mbuf: Defining dependency "mbuf"
00:01:53.226 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:53.226 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:53.226 Compiler for C supports arguments -mpclmul: YES
00:01:53.226 Compiler for C supports arguments -maes: YES
00:01:53.226 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:53.226 Compiler for C supports arguments -mavx512bw: YES
00:01:53.226 Compiler for C supports arguments -mavx512dq: YES
00:01:53.226 Compiler for C supports arguments -mavx512vl: YES
00:01:53.226 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:53.226 Compiler for C supports arguments -mavx2: YES
00:01:53.226 Compiler for C supports arguments -mavx: YES
00:01:53.226 Message: lib/net: Defining dependency "net"
00:01:53.226 Message: lib/meter: Defining dependency "meter"
00:01:53.226 Message: lib/ethdev: Defining dependency "ethdev"
00:01:53.226 Message: lib/pci: Defining dependency "pci"
00:01:53.226 Message: lib/cmdline: Defining dependency "cmdline"
00:01:53.226 Message: lib/hash: Defining dependency "hash"
00:01:53.226 Message: lib/timer: Defining dependency "timer"
00:01:53.226 Message: lib/compressdev: Defining dependency "compressdev"
00:01:53.226 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:53.226 Message: lib/dmadev: Defining dependency "dmadev"
00:01:53.226 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:53.226 Message: lib/power: Defining dependency "power"
00:01:53.226 Message: lib/reorder: Defining dependency "reorder"
00:01:53.226 Message: lib/security: Defining dependency "security"
00:01:53.226 Has header "linux/userfaultfd.h" : YES
00:01:53.226 Has header "linux/vduse.h" : YES
00:01:53.226 Message: lib/vhost: Defining dependency "vhost"
00:01:53.226 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:53.226 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:53.226 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:53.226 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:53.226 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:53.226 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:53.226 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:53.226 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:53.226 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:53.226 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:53.226 Program doxygen found: YES (/usr/bin/doxygen)
00:01:53.226 Configuring doxy-api-html.conf using configuration
00:01:53.226 Configuring doxy-api-man.conf using configuration
00:01:53.226 Program mandb found: YES (/usr/bin/mandb)
00:01:53.226 Program sphinx-build found: NO
00:01:53.226 Configuring rte_build_config.h using configuration
00:01:53.226 Message:
00:01:53.226 =================
00:01:53.226 Applications Enabled
00:01:53.226 =================
00:01:53.226
00:01:53.226 apps:
00:01:53.226
00:01:53.226
00:01:53.226 Message:
00:01:53.226 =================
00:01:53.226 Libraries Enabled
00:01:53.226 =================
00:01:53.226
00:01:53.226 libs:
00:01:53.226 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:53.226 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:53.226 cryptodev, dmadev, power, reorder, security, vhost,
00:01:53.226
00:01:53.226 Message:
00:01:53.226 ===============
00:01:53.226 Drivers Enabled
00:01:53.226 ===============
00:01:53.226
00:01:53.226 common:
00:01:53.226
00:01:53.226 bus:
00:01:53.226 pci, vdev,
00:01:53.226 mempool:
00:01:53.226 ring,
00:01:53.226 dma:
00:01:53.226
00:01:53.226 net:
00:01:53.226
00:01:53.226 crypto:
00:01:53.226
00:01:53.226 compress:
00:01:53.226
00:01:53.226 vdpa:
00:01:53.226
00:01:53.226
00:01:53.226 Message:
00:01:53.226 =================
00:01:53.226 Content Skipped
00:01:53.226 =================
00:01:53.226
00:01:53.226 apps:
00:01:53.226 dumpcap: explicitly disabled via build config
00:01:53.226 graph: explicitly disabled via build config
00:01:53.226 pdump: explicitly disabled via build config
00:01:53.226 proc-info: explicitly disabled via build config
00:01:53.226 test-acl: explicitly disabled via build config
00:01:53.226 test-bbdev: explicitly disabled via build config
00:01:53.226 test-cmdline: explicitly disabled via build config
00:01:53.226 test-compress-perf: explicitly disabled via build config
00:01:53.226 test-crypto-perf: explicitly disabled via build config
00:01:53.226 test-dma-perf: explicitly disabled via build config
00:01:53.226 test-eventdev: explicitly disabled via build config
00:01:53.226 test-fib: explicitly disabled via build config
00:01:53.226 test-flow-perf: explicitly disabled via build config
00:01:53.226 test-gpudev: explicitly disabled via build config
00:01:53.226 test-mldev: explicitly disabled via build config
00:01:53.226 test-pipeline: explicitly disabled via build config
00:01:53.226 test-pmd: explicitly disabled via build config
00:01:53.226 test-regex: explicitly disabled via build config
00:01:53.226 test-sad: explicitly disabled via build config
00:01:53.226 test-security-perf: explicitly disabled via build config
00:01:53.226
00:01:53.226 libs:
00:01:53.226 metrics: explicitly disabled via build config
00:01:53.226 acl: explicitly disabled via build config
00:01:53.226 bbdev: explicitly disabled via build config
00:01:53.226 bitratestats: explicitly disabled via build config
00:01:53.226 bpf: explicitly disabled via build config
00:01:53.226 cfgfile: explicitly disabled via build config
00:01:53.226 distributor: explicitly disabled via build config
00:01:53.226 efd: explicitly disabled via build config
00:01:53.226 eventdev: explicitly disabled via build config
00:01:53.226 dispatcher: explicitly disabled via build config
00:01:53.226 gpudev: explicitly disabled via build config
00:01:53.226 gro: explicitly disabled via build config
00:01:53.226 gso: explicitly disabled via build config
00:01:53.226 ip_frag: explicitly disabled via build config
00:01:53.226 jobstats: explicitly disabled via build config
00:01:53.226 latencystats: explicitly disabled via build config
00:01:53.226 lpm: explicitly disabled via build config
00:01:53.226 member: explicitly disabled via build config
00:01:53.226 pcapng: explicitly disabled via build config
00:01:53.226 rawdev: explicitly disabled via build config
00:01:53.226 regexdev: explicitly disabled via build config
00:01:53.226 mldev: explicitly disabled via build config
00:01:53.226 rib: explicitly disabled via build config
00:01:53.226 sched: explicitly disabled via build config
00:01:53.226 stack: explicitly disabled via build config
ipsec: explicitly disabled via build config 00:01:53.226 pdcp: explicitly disabled via build config 00:01:53.226 fib: explicitly disabled via build config 00:01:53.226 port: explicitly disabled via build config 00:01:53.226 pdump: explicitly disabled via build config 00:01:53.226 table: explicitly disabled via build config 00:01:53.226 pipeline: explicitly disabled via build config 00:01:53.226 graph: explicitly disabled via build config 00:01:53.226 node: explicitly disabled via build config 00:01:53.226 00:01:53.226 drivers: 00:01:53.226 common/cpt: not in enabled drivers build config 00:01:53.226 common/dpaax: not in enabled drivers build config 00:01:53.226 common/iavf: not in enabled drivers build config 00:01:53.226 common/idpf: not in enabled drivers build config 00:01:53.226 common/mvep: not in enabled drivers build config 00:01:53.226 common/octeontx: not in enabled drivers build config 00:01:53.226 bus/auxiliary: not in enabled drivers build config 00:01:53.226 bus/cdx: not in enabled drivers build config 00:01:53.226 bus/dpaa: not in enabled drivers build config 00:01:53.226 bus/fslmc: not in enabled drivers build config 00:01:53.226 bus/ifpga: not in enabled drivers build config 00:01:53.226 bus/platform: not in enabled drivers build config 00:01:53.226 bus/vmbus: not in enabled drivers build config 00:01:53.226 common/cnxk: not in enabled drivers build config 00:01:53.226 common/mlx5: not in enabled drivers build config 00:01:53.226 common/nfp: not in enabled drivers build config 00:01:53.226 common/qat: not in enabled drivers build config 00:01:53.226 common/sfc_efx: not in enabled drivers build config 00:01:53.226 mempool/bucket: not in enabled drivers build config 00:01:53.226 mempool/cnxk: not in enabled drivers build config 00:01:53.226 mempool/dpaa: not in enabled drivers build config 00:01:53.226 mempool/dpaa2: not in enabled drivers build config 00:01:53.226 mempool/octeontx: not in enabled drivers build config 00:01:53.226 mempool/stack: not 
in enabled drivers build config 00:01:53.226 dma/cnxk: not in enabled drivers build config 00:01:53.226 dma/dpaa: not in enabled drivers build config 00:01:53.226 dma/dpaa2: not in enabled drivers build config 00:01:53.226 dma/hisilicon: not in enabled drivers build config 00:01:53.226 dma/idxd: not in enabled drivers build config 00:01:53.226 dma/ioat: not in enabled drivers build config 00:01:53.226 dma/skeleton: not in enabled drivers build config 00:01:53.226 net/af_packet: not in enabled drivers build config 00:01:53.226 net/af_xdp: not in enabled drivers build config 00:01:53.226 net/ark: not in enabled drivers build config 00:01:53.226 net/atlantic: not in enabled drivers build config 00:01:53.226 net/avp: not in enabled drivers build config 00:01:53.226 net/axgbe: not in enabled drivers build config 00:01:53.226 net/bnx2x: not in enabled drivers build config 00:01:53.226 net/bnxt: not in enabled drivers build config 00:01:53.226 net/bonding: not in enabled drivers build config 00:01:53.226 net/cnxk: not in enabled drivers build config 00:01:53.226 net/cpfl: not in enabled drivers build config 00:01:53.226 net/cxgbe: not in enabled drivers build config 00:01:53.226 net/dpaa: not in enabled drivers build config 00:01:53.226 net/dpaa2: not in enabled drivers build config 00:01:53.226 net/e1000: not in enabled drivers build config 00:01:53.226 net/ena: not in enabled drivers build config 00:01:53.226 net/enetc: not in enabled drivers build config 00:01:53.226 net/enetfec: not in enabled drivers build config 00:01:53.226 net/enic: not in enabled drivers build config 00:01:53.227 net/failsafe: not in enabled drivers build config 00:01:53.227 net/fm10k: not in enabled drivers build config 00:01:53.227 net/gve: not in enabled drivers build config 00:01:53.227 net/hinic: not in enabled drivers build config 00:01:53.227 net/hns3: not in enabled drivers build config 00:01:53.227 net/i40e: not in enabled drivers build config 00:01:53.227 net/iavf: not in enabled 
drivers build config 00:01:53.227 net/ice: not in enabled drivers build config 00:01:53.227 net/idpf: not in enabled drivers build config 00:01:53.227 net/igc: not in enabled drivers build config 00:01:53.227 net/ionic: not in enabled drivers build config 00:01:53.227 net/ipn3ke: not in enabled drivers build config 00:01:53.227 net/ixgbe: not in enabled drivers build config 00:01:53.227 net/mana: not in enabled drivers build config 00:01:53.227 net/memif: not in enabled drivers build config 00:01:53.227 net/mlx4: not in enabled drivers build config 00:01:53.227 net/mlx5: not in enabled drivers build config 00:01:53.227 net/mvneta: not in enabled drivers build config 00:01:53.227 net/mvpp2: not in enabled drivers build config 00:01:53.227 net/netvsc: not in enabled drivers build config 00:01:53.227 net/nfb: not in enabled drivers build config 00:01:53.227 net/nfp: not in enabled drivers build config 00:01:53.227 net/ngbe: not in enabled drivers build config 00:01:53.227 net/null: not in enabled drivers build config 00:01:53.227 net/octeontx: not in enabled drivers build config 00:01:53.227 net/octeon_ep: not in enabled drivers build config 00:01:53.227 net/pcap: not in enabled drivers build config 00:01:53.227 net/pfe: not in enabled drivers build config 00:01:53.227 net/qede: not in enabled drivers build config 00:01:53.227 net/ring: not in enabled drivers build config 00:01:53.227 net/sfc: not in enabled drivers build config 00:01:53.227 net/softnic: not in enabled drivers build config 00:01:53.227 net/tap: not in enabled drivers build config 00:01:53.227 net/thunderx: not in enabled drivers build config 00:01:53.227 net/txgbe: not in enabled drivers build config 00:01:53.227 net/vdev_netvsc: not in enabled drivers build config 00:01:53.227 net/vhost: not in enabled drivers build config 00:01:53.227 net/virtio: not in enabled drivers build config 00:01:53.227 net/vmxnet3: not in enabled drivers build config 00:01:53.227 raw/*: missing internal dependency, "rawdev" 
00:01:53.227 crypto/armv8: not in enabled drivers build config 00:01:53.227 crypto/bcmfs: not in enabled drivers build config 00:01:53.227 crypto/caam_jr: not in enabled drivers build config 00:01:53.227 crypto/ccp: not in enabled drivers build config 00:01:53.227 crypto/cnxk: not in enabled drivers build config 00:01:53.227 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.227 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.227 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.227 crypto/mlx5: not in enabled drivers build config 00:01:53.227 crypto/mvsam: not in enabled drivers build config 00:01:53.227 crypto/nitrox: not in enabled drivers build config 00:01:53.227 crypto/null: not in enabled drivers build config 00:01:53.227 crypto/octeontx: not in enabled drivers build config 00:01:53.227 crypto/openssl: not in enabled drivers build config 00:01:53.227 crypto/scheduler: not in enabled drivers build config 00:01:53.227 crypto/uadk: not in enabled drivers build config 00:01:53.227 crypto/virtio: not in enabled drivers build config 00:01:53.227 compress/isal: not in enabled drivers build config 00:01:53.227 compress/mlx5: not in enabled drivers build config 00:01:53.227 compress/octeontx: not in enabled drivers build config 00:01:53.227 compress/zlib: not in enabled drivers build config 00:01:53.227 regex/*: missing internal dependency, "regexdev" 00:01:53.227 ml/*: missing internal dependency, "mldev" 00:01:53.227 vdpa/ifc: not in enabled drivers build config 00:01:53.227 vdpa/mlx5: not in enabled drivers build config 00:01:53.227 vdpa/nfp: not in enabled drivers build config 00:01:53.227 vdpa/sfc: not in enabled drivers build config 00:01:53.227 event/*: missing internal dependency, "eventdev" 00:01:53.227 baseband/*: missing internal dependency, "bbdev" 00:01:53.227 gpu/*: missing internal dependency, "gpudev" 00:01:53.227 00:01:53.227 00:01:53.227 Build targets in project: 85 00:01:53.227 00:01:53.227 DPDK 23.11.0 
00:01:53.227 00:01:53.227 User defined options 00:01:53.227 buildtype : debug 00:01:53.227 default_library : shared 00:01:53.227 libdir : lib 00:01:53.227 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:53.227 b_sanitize : address 00:01:53.227 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:53.227 c_link_args : 00:01:53.227 cpu_instruction_set: native 00:01:53.227 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:53.227 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:53.227 enable_docs : false 00:01:53.227 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:53.227 enable_kmods : false 00:01:53.227 tests : false 00:01:53.227 00:01:53.227 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.227 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:53.227 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.227 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.227 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.227 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.227 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.227 [6/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:53.227 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.227 [8/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.227 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.227 [10/265] Linking static target lib/librte_kvargs.a 00:01:53.227 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.227 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.227 [13/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.227 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.227 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:53.227 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:53.227 [17/265] Linking static target lib/librte_log.a 00:01:53.227 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.227 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.227 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.227 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.488 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.752 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.752 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.752 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.753 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.753 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.753 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.753 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.753 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.753 [31/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.753 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.753 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.753 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.753 [35/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.753 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.753 [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.753 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.753 [39/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.753 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.753 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.753 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.753 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.753 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.753 [45/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.753 [46/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.753 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.753 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.753 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.753 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.753 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.753 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.753 [53/265] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.753 [54/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:54.018 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:54.018 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:54.018 [57/265] Linking static target lib/librte_telemetry.a 00:01:54.018 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.018 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.018 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:54.018 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:54.018 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.018 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:54.018 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:54.018 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:54.018 [66/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:54.018 [67/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.018 [68/265] Linking static target lib/librte_pci.a 00:01:54.018 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:54.283 [70/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:54.283 [71/265] Linking target lib/librte_log.so.24.0 00:01:54.283 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.283 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.283 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.283 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:54.283 [76/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:54.283 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:54.548 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:54.548 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:54.548 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:54.548 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:54.548 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:54.548 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:54.548 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:54.548 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.548 [86/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:54.548 [87/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.548 [88/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.548 [89/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.548 [90/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.812 [91/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:54.812 [92/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:54.812 [93/265] Linking static target lib/librte_ring.a 00:01:54.812 [94/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:54.812 [95/265] Linking target lib/librte_kvargs.so.24.0 00:01:54.812 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.812 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.812 [98/265] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.812 [99/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.812 [100/265] Linking static target lib/librte_meter.a 00:01:54.812 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.813 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.813 [103/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.813 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:54.813 [105/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.813 [106/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.813 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.813 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.813 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.075 [110/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.075 [111/265] Linking target lib/librte_telemetry.so.24.0 00:01:55.075 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.075 [113/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:55.075 [114/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.075 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.075 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.075 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.075 [118/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.075 [119/265] Linking static target lib/librte_mempool.a 00:01:55.075 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.341 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.341 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.341 [123/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:55.341 [124/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.341 [125/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.341 [126/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.341 [127/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.341 [128/265] Linking static target lib/librte_rcu.a 00:01:55.341 [129/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.341 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.341 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.341 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.341 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.341 [134/265] Linking static target lib/librte_cmdline.a 00:01:55.341 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.607 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.607 [137/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.607 [138/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.607 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.607 [140/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.607 [141/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.607 [142/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.871 [143/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.871 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.871 [145/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.871 [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.871 [147/265] Linking static target lib/librte_timer.a 00:01:55.871 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.871 [149/265] Linking static target lib/librte_eal.a 00:01:55.871 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.871 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.871 [152/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.871 [153/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.871 [154/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:56.133 [155/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.133 [156/265] Linking static target lib/librte_dmadev.a 00:01:56.133 [157/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.133 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.133 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.393 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.393 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.393 [162/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.393 [163/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.393 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.393 [165/265] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:56.393 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.393 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.393 [168/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.393 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:56.393 [170/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.393 [171/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.393 [172/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:56.393 [173/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.393 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.393 [175/265] Linking static target lib/librte_net.a 00:01:56.393 [176/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.393 [177/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.393 [178/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:56.652 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.652 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.652 [181/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.652 [182/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.652 [183/265] Linking static target lib/librte_power.a 00:01:56.652 [184/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.912 [185/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.912 [186/265] Linking static target lib/librte_compressdev.a 00:01:56.912 [187/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 
00:01:56.912 [188/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.912 [189/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.912 [190/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.912 [191/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.912 [192/265] Linking static target drivers/librte_bus_vdev.a 00:01:56.912 [193/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.912 [194/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.912 [195/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.913 [196/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.913 [197/265] Linking static target lib/librte_hash.a 00:01:56.913 [198/265] Linking static target drivers/librte_bus_pci.a 00:01:57.172 [199/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.172 [200/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.172 [201/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.172 [202/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.172 [203/265] Linking static target lib/librte_reorder.a 00:01:57.172 [204/265] Linking static target drivers/librte_mempool_ring.a 00:01:57.172 [205/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.172 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.172 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.172 [208/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.172 [209/265] 
Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [210/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.432 [212/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.691 [213/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.691 [214/265] Linking static target lib/librte_security.a 00:01:58.258 [215/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.518 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.086 [217/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.086 [218/265] Linking static target lib/librte_mbuf.a 00:01:59.345 [219/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.345 [220/265] Linking static target lib/librte_cryptodev.a 00:01:59.603 [221/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.171 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.171 [223/265] Linking static target lib/librte_ethdev.a 00:02:00.171 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.548 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.548 [226/265] Linking target lib/librte_eal.so.24.0 00:02:01.806 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:01.806 [228/265] Linking target lib/librte_pci.so.24.0 00:02:01.806 [229/265] Linking target lib/librte_timer.so.24.0 00:02:01.806 [230/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:01.806 [231/265] Linking target 
lib/librte_meter.so.24.0 00:02:01.806 [232/265] Linking target lib/librte_ring.so.24.0 00:02:01.806 [233/265] Linking target lib/librte_dmadev.so.24.0 00:02:02.065 [234/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:02.065 [235/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:02.065 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:02.065 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.065 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:02.065 [239/265] Linking target lib/librte_rcu.so.24.0 00:02:02.065 [240/265] Linking target lib/librte_mempool.so.24.0 00:02:02.065 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:02.065 [242/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:02.065 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:02.065 [244/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:02.065 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:02.323 [246/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:02.323 [247/265] Linking target lib/librte_reorder.so.24.0 00:02:02.323 [248/265] Linking target lib/librte_compressdev.so.24.0 00:02:02.323 [249/265] Linking target lib/librte_net.so.24.0 00:02:02.323 [250/265] Linking target lib/librte_cryptodev.so.24.0 00:02:02.582 [251/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:02.582 [252/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:02.582 [253/265] Linking target lib/librte_hash.so.24.0 00:02:02.582 [254/265] Linking target lib/librte_cmdline.so.24.0 00:02:02.582 [255/265] Linking target lib/librte_security.so.24.0 00:02:02.582 [256/265] Generating symbol 
file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:02.840 [257/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.215 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.215 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:04.473 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:04.473 [261/265] Linking target lib/librte_power.so.24.0 00:02:26.398 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.398 [263/265] Linking static target lib/librte_vhost.a 00:02:27.779 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.779 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:27.779 INFO: autodetecting backend as ninja 00:02:27.779 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:28.715 CC lib/ut_mock/mock.o 00:02:28.715 CC lib/ut/ut.o 00:02:28.715 CC lib/log/log.o 00:02:28.715 CC lib/log/log_flags.o 00:02:28.715 CC lib/log/log_deprecated.o 00:02:28.715 LIB libspdk_ut_mock.a 00:02:28.715 SO libspdk_ut_mock.so.6.0 00:02:28.715 LIB libspdk_log.a 00:02:28.715 LIB libspdk_ut.a 00:02:28.715 SO libspdk_log.so.7.0 00:02:28.715 SO libspdk_ut.so.2.0 00:02:28.715 SYMLINK libspdk_ut_mock.so 00:02:28.715 SYMLINK libspdk_ut.so 00:02:28.715 SYMLINK libspdk_log.so 00:02:28.974 CC lib/ioat/ioat.o 00:02:28.974 CC lib/dma/dma.o 00:02:28.974 CXX lib/trace_parser/trace.o 00:02:28.974 CC lib/util/base64.o 00:02:28.974 CC lib/util/bit_array.o 00:02:28.974 CC lib/util/cpuset.o 00:02:28.974 CC lib/util/crc16.o 00:02:28.974 CC lib/util/crc32.o 00:02:28.974 CC lib/util/crc32c.o 00:02:28.974 CC lib/util/crc32_ieee.o 00:02:28.974 CC lib/util/crc64.o 00:02:28.974 CC lib/util/dif.o 00:02:28.974 CC lib/util/fd.o 00:02:28.974 CC lib/util/file.o 00:02:28.974 CC lib/util/hexlify.o 
00:02:28.974 CC lib/util/iov.o 00:02:28.974 CC lib/util/math.o 00:02:28.974 CC lib/util/pipe.o 00:02:28.974 CC lib/util/strerror_tls.o 00:02:28.974 CC lib/util/string.o 00:02:28.974 CC lib/util/uuid.o 00:02:28.974 CC lib/util/fd_group.o 00:02:28.974 CC lib/util/xor.o 00:02:28.974 CC lib/util/zipf.o 00:02:28.974 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.974 CC lib/vfio_user/host/vfio_user.o 00:02:29.232 LIB libspdk_dma.a 00:02:29.232 SO libspdk_dma.so.4.0 00:02:29.232 LIB libspdk_ioat.a 00:02:29.232 SYMLINK libspdk_dma.so 00:02:29.491 SO libspdk_ioat.so.7.0 00:02:29.491 SYMLINK libspdk_ioat.so 00:02:29.491 LIB libspdk_vfio_user.a 00:02:29.491 SO libspdk_vfio_user.so.5.0 00:02:29.491 SYMLINK libspdk_vfio_user.so 00:02:29.751 LIB libspdk_util.a 00:02:29.751 SO libspdk_util.so.9.0 00:02:30.009 SYMLINK libspdk_util.so 00:02:30.009 CC lib/conf/conf.o 00:02:30.009 CC lib/idxd/idxd.o 00:02:30.009 CC lib/idxd/idxd_user.o 00:02:30.009 CC lib/rdma/common.o 00:02:30.009 CC lib/rdma/rdma_verbs.o 00:02:30.009 CC lib/env_dpdk/env.o 00:02:30.009 CC lib/env_dpdk/memory.o 00:02:30.009 CC lib/env_dpdk/pci.o 00:02:30.009 CC lib/env_dpdk/init.o 00:02:30.009 CC lib/json/json_parse.o 00:02:30.009 CC lib/vmd/vmd.o 00:02:30.009 CC lib/env_dpdk/threads.o 00:02:30.009 CC lib/json/json_util.o 00:02:30.009 CC lib/vmd/led.o 00:02:30.009 CC lib/env_dpdk/pci_ioat.o 00:02:30.009 CC lib/json/json_write.o 00:02:30.009 CC lib/env_dpdk/pci_virtio.o 00:02:30.009 CC lib/env_dpdk/pci_vmd.o 00:02:30.009 CC lib/env_dpdk/pci_idxd.o 00:02:30.009 CC lib/env_dpdk/pci_event.o 00:02:30.009 CC lib/env_dpdk/sigbus_handler.o 00:02:30.009 CC lib/env_dpdk/pci_dpdk.o 00:02:30.009 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.009 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:30.267 LIB libspdk_conf.a 00:02:30.525 SO libspdk_conf.so.6.0 00:02:30.525 LIB libspdk_rdma.a 00:02:30.525 SYMLINK libspdk_conf.so 00:02:30.525 LIB libspdk_json.a 00:02:30.525 SO libspdk_rdma.so.6.0 00:02:30.525 SO libspdk_json.so.6.0 00:02:30.525 SYMLINK 
libspdk_rdma.so 00:02:30.525 SYMLINK libspdk_json.so 00:02:30.784 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.784 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.784 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.784 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.784 LIB libspdk_idxd.a 00:02:31.042 SO libspdk_idxd.so.12.0 00:02:31.042 LIB libspdk_trace_parser.a 00:02:31.042 SYMLINK libspdk_idxd.so 00:02:31.042 SO libspdk_trace_parser.so.5.0 00:02:31.042 LIB libspdk_vmd.a 00:02:31.042 LIB libspdk_jsonrpc.a 00:02:31.042 SO libspdk_vmd.so.6.0 00:02:31.042 SO libspdk_jsonrpc.so.6.0 00:02:31.042 SYMLINK libspdk_trace_parser.so 00:02:31.042 SYMLINK libspdk_vmd.so 00:02:31.042 SYMLINK libspdk_jsonrpc.so 00:02:31.301 CC lib/rpc/rpc.o 00:02:31.560 LIB libspdk_rpc.a 00:02:31.560 SO libspdk_rpc.so.6.0 00:02:31.560 SYMLINK libspdk_rpc.so 00:02:31.819 CC lib/notify/notify.o 00:02:31.819 CC lib/keyring/keyring.o 00:02:31.819 CC lib/trace/trace.o 00:02:31.819 CC lib/notify/notify_rpc.o 00:02:31.819 CC lib/keyring/keyring_rpc.o 00:02:31.819 CC lib/trace/trace_flags.o 00:02:31.819 CC lib/trace/trace_rpc.o 00:02:32.077 LIB libspdk_notify.a 00:02:32.077 SO libspdk_notify.so.6.0 00:02:32.077 SYMLINK libspdk_notify.so 00:02:32.077 LIB libspdk_keyring.a 00:02:32.077 LIB libspdk_trace.a 00:02:32.077 SO libspdk_keyring.so.1.0 00:02:32.077 SO libspdk_trace.so.10.0 00:02:32.077 SYMLINK libspdk_keyring.so 00:02:32.077 SYMLINK libspdk_trace.so 00:02:32.336 CC lib/sock/sock.o 00:02:32.336 CC lib/sock/sock_rpc.o 00:02:32.336 CC lib/thread/thread.o 00:02:32.336 CC lib/thread/iobuf.o 00:02:32.902 LIB libspdk_sock.a 00:02:32.902 SO libspdk_sock.so.9.0 00:02:32.902 SYMLINK libspdk_sock.so 00:02:32.902 LIB libspdk_env_dpdk.a 00:02:32.902 SO libspdk_env_dpdk.so.14.0 00:02:33.161 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:33.161 CC lib/nvme/nvme_ctrlr.o 00:02:33.161 CC lib/nvme/nvme_fabric.o 00:02:33.161 CC lib/nvme/nvme_ns_cmd.o 00:02:33.161 CC lib/nvme/nvme_ns.o 00:02:33.161 CC lib/nvme/nvme_pcie_common.o 00:02:33.161 CC 
lib/nvme/nvme_pcie.o 00:02:33.161 CC lib/nvme/nvme_qpair.o 00:02:33.161 CC lib/nvme/nvme.o 00:02:33.161 CC lib/nvme/nvme_quirks.o 00:02:33.161 CC lib/nvme/nvme_transport.o 00:02:33.161 CC lib/nvme/nvme_discovery.o 00:02:33.161 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.161 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.161 CC lib/nvme/nvme_tcp.o 00:02:33.161 CC lib/nvme/nvme_opal.o 00:02:33.161 CC lib/nvme/nvme_io_msg.o 00:02:33.161 CC lib/nvme/nvme_poll_group.o 00:02:33.161 CC lib/nvme/nvme_zns.o 00:02:33.161 CC lib/nvme/nvme_stubs.o 00:02:33.161 CC lib/nvme/nvme_auth.o 00:02:33.161 CC lib/nvme/nvme_cuse.o 00:02:33.161 CC lib/nvme/nvme_rdma.o 00:02:33.161 SYMLINK libspdk_env_dpdk.so 00:02:34.540 LIB libspdk_thread.a 00:02:34.540 SO libspdk_thread.so.10.0 00:02:34.540 SYMLINK libspdk_thread.so 00:02:34.540 CC lib/accel/accel.o 00:02:34.540 CC lib/blob/blobstore.o 00:02:34.540 CC lib/accel/accel_rpc.o 00:02:34.540 CC lib/init/json_config.o 00:02:34.540 CC lib/virtio/virtio.o 00:02:34.540 CC lib/init/subsystem.o 00:02:34.540 CC lib/accel/accel_sw.o 00:02:34.540 CC lib/blob/request.o 00:02:34.540 CC lib/virtio/virtio_vhost_user.o 00:02:34.540 CC lib/init/subsystem_rpc.o 00:02:34.540 CC lib/blob/zeroes.o 00:02:34.540 CC lib/virtio/virtio_vfio_user.o 00:02:34.540 CC lib/init/rpc.o 00:02:34.540 CC lib/blob/blob_bs_dev.o 00:02:34.540 CC lib/virtio/virtio_pci.o 00:02:34.799 LIB libspdk_init.a 00:02:35.058 SO libspdk_init.so.5.0 00:02:35.058 SYMLINK libspdk_init.so 00:02:35.058 LIB libspdk_virtio.a 00:02:35.058 SO libspdk_virtio.so.7.0 00:02:35.058 SYMLINK libspdk_virtio.so 00:02:35.058 CC lib/event/app.o 00:02:35.058 CC lib/event/reactor.o 00:02:35.058 CC lib/event/log_rpc.o 00:02:35.058 CC lib/event/app_rpc.o 00:02:35.058 CC lib/event/scheduler_static.o 00:02:35.626 LIB libspdk_event.a 00:02:35.626 SO libspdk_event.so.13.0 00:02:35.884 SYMLINK libspdk_event.so 00:02:35.884 LIB libspdk_nvme.a 00:02:35.884 LIB libspdk_accel.a 00:02:35.884 SO libspdk_accel.so.15.0 00:02:36.142 SO 
libspdk_nvme.so.13.0 00:02:36.142 SYMLINK libspdk_accel.so 00:02:36.142 CC lib/bdev/bdev.o 00:02:36.142 CC lib/bdev/bdev_rpc.o 00:02:36.142 CC lib/bdev/bdev_zone.o 00:02:36.142 CC lib/bdev/part.o 00:02:36.142 CC lib/bdev/scsi_nvme.o 00:02:36.399 SYMLINK libspdk_nvme.so 00:02:38.327 LIB libspdk_blob.a 00:02:38.327 SO libspdk_blob.so.11.0 00:02:38.586 SYMLINK libspdk_blob.so 00:02:38.586 CC lib/blobfs/blobfs.o 00:02:38.586 CC lib/blobfs/tree.o 00:02:38.586 CC lib/lvol/lvol.o 00:02:39.520 LIB libspdk_bdev.a 00:02:39.520 SO libspdk_bdev.so.15.0 00:02:39.787 SYMLINK libspdk_bdev.so 00:02:39.787 LIB libspdk_blobfs.a 00:02:39.787 SO libspdk_blobfs.so.10.0 00:02:39.787 SYMLINK libspdk_blobfs.so 00:02:39.787 LIB libspdk_lvol.a 00:02:39.787 CC lib/ublk/ublk.o 00:02:39.787 CC lib/nbd/nbd.o 00:02:39.787 CC lib/ublk/ublk_rpc.o 00:02:39.787 CC lib/nbd/nbd_rpc.o 00:02:39.787 SO libspdk_lvol.so.10.0 00:02:39.787 CC lib/nvmf/ctrlr.o 00:02:39.787 CC lib/nvmf/ctrlr_discovery.o 00:02:39.787 CC lib/scsi/dev.o 00:02:39.787 CC lib/ftl/ftl_core.o 00:02:39.787 CC lib/nvmf/ctrlr_bdev.o 00:02:39.787 CC lib/scsi/lun.o 00:02:39.787 CC lib/scsi/port.o 00:02:39.787 CC lib/nvmf/subsystem.o 00:02:39.787 CC lib/scsi/scsi.o 00:02:39.787 CC lib/ftl/ftl_init.o 00:02:39.787 CC lib/ftl/ftl_layout.o 00:02:39.787 CC lib/nvmf/nvmf.o 00:02:39.787 CC lib/scsi/scsi_bdev.o 00:02:39.787 CC lib/ftl/ftl_debug.o 00:02:39.787 CC lib/nvmf/nvmf_rpc.o 00:02:39.787 CC lib/scsi/scsi_pr.o 00:02:39.787 CC lib/scsi/scsi_rpc.o 00:02:39.787 CC lib/ftl/ftl_io.o 00:02:39.787 CC lib/nvmf/transport.o 00:02:39.787 CC lib/ftl/ftl_sb.o 00:02:39.787 CC lib/nvmf/tcp.o 00:02:39.787 CC lib/scsi/task.o 00:02:39.787 CC lib/nvmf/rdma.o 00:02:39.787 CC lib/ftl/ftl_l2p.o 00:02:39.787 CC lib/ftl/ftl_l2p_flat.o 00:02:39.787 CC lib/ftl/ftl_nv_cache.o 00:02:39.787 CC lib/ftl/ftl_band.o 00:02:39.787 CC lib/ftl/ftl_band_ops.o 00:02:39.787 CC lib/ftl/ftl_writer.o 00:02:39.787 CC lib/ftl/ftl_rq.o 00:02:39.787 CC lib/ftl/ftl_reloc.o 00:02:39.787 CC 
lib/ftl/ftl_l2p_cache.o 00:02:39.787 CC lib/ftl/ftl_p2l.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:39.787 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:40.047 SYMLINK libspdk_lvol.so 00:02:40.047 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:40.309 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:40.310 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:40.310 CC lib/ftl/utils/ftl_conf.o 00:02:40.310 CC lib/ftl/utils/ftl_md.o 00:02:40.310 CC lib/ftl/utils/ftl_mempool.o 00:02:40.310 CC lib/ftl/utils/ftl_bitmap.o 00:02:40.310 CC lib/ftl/utils/ftl_property.o 00:02:40.310 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:40.310 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:40.310 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:40.310 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:40.310 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:40.310 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:40.310 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:40.310 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:40.310 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:40.310 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:40.310 CC lib/ftl/base/ftl_base_dev.o 00:02:40.310 CC lib/ftl/base/ftl_base_bdev.o 00:02:40.310 CC lib/ftl/ftl_trace.o 00:02:40.879 LIB libspdk_nbd.a 00:02:40.880 SO libspdk_nbd.so.7.0 00:02:40.880 SYMLINK libspdk_nbd.so 00:02:40.880 LIB libspdk_scsi.a 00:02:40.880 SO libspdk_scsi.so.9.0 00:02:41.138 LIB libspdk_ublk.a 00:02:41.138 SO libspdk_ublk.so.3.0 00:02:41.138 SYMLINK libspdk_scsi.so 00:02:41.138 SYMLINK libspdk_ublk.so 00:02:41.138 CC lib/vhost/vhost.o 00:02:41.138 CC lib/iscsi/conn.o 00:02:41.138 CC lib/vhost/vhost_rpc.o 00:02:41.138 CC lib/iscsi/init_grp.o 00:02:41.138 CC 
lib/iscsi/iscsi.o 00:02:41.138 CC lib/vhost/vhost_scsi.o 00:02:41.138 CC lib/iscsi/md5.o 00:02:41.138 CC lib/vhost/vhost_blk.o 00:02:41.138 CC lib/iscsi/param.o 00:02:41.138 CC lib/vhost/rte_vhost_user.o 00:02:41.138 CC lib/iscsi/portal_grp.o 00:02:41.138 CC lib/iscsi/tgt_node.o 00:02:41.138 CC lib/iscsi/iscsi_subsystem.o 00:02:41.138 CC lib/iscsi/iscsi_rpc.o 00:02:41.396 CC lib/iscsi/task.o 00:02:41.655 LIB libspdk_ftl.a 00:02:41.655 SO libspdk_ftl.so.9.0 00:02:42.221 SYMLINK libspdk_ftl.so 00:02:42.792 LIB libspdk_vhost.a 00:02:42.792 SO libspdk_vhost.so.8.0 00:02:42.792 SYMLINK libspdk_vhost.so 00:02:43.051 LIB libspdk_iscsi.a 00:02:43.051 LIB libspdk_nvmf.a 00:02:43.051 SO libspdk_iscsi.so.8.0 00:02:43.309 SO libspdk_nvmf.so.18.0 00:02:43.309 SYMLINK libspdk_iscsi.so 00:02:43.309 SYMLINK libspdk_nvmf.so 00:02:43.568 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.826 CC module/accel/iaa/accel_iaa.o 00:02:43.826 CC module/accel/ioat/accel_ioat.o 00:02:43.826 CC module/accel/error/accel_error.o 00:02:43.826 CC module/accel/dsa/accel_dsa.o 00:02:43.826 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.826 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.826 CC module/accel/error/accel_error_rpc.o 00:02:43.826 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.826 CC module/blob/bdev/blob_bdev.o 00:02:43.826 CC module/keyring/file/keyring.o 00:02:43.826 CC module/keyring/file/keyring_rpc.o 00:02:43.826 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.826 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.826 CC module/sock/posix/posix.o 00:02:43.826 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.826 LIB libspdk_env_dpdk_rpc.a 00:02:43.826 SO libspdk_env_dpdk_rpc.so.6.0 00:02:43.826 SYMLINK libspdk_env_dpdk_rpc.so 00:02:43.826 LIB libspdk_keyring_file.a 00:02:43.826 LIB libspdk_scheduler_gscheduler.a 00:02:43.826 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.084 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.084 SO libspdk_keyring_file.so.1.0 00:02:44.084 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.084 LIB libspdk_accel_error.a 00:02:44.084 LIB libspdk_accel_ioat.a 00:02:44.084 LIB libspdk_scheduler_dynamic.a 00:02:44.084 SO libspdk_accel_error.so.2.0 00:02:44.084 LIB libspdk_accel_iaa.a 00:02:44.084 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.084 SO libspdk_accel_ioat.so.6.0 00:02:44.084 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.084 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.084 SYMLINK libspdk_keyring_file.so 00:02:44.084 SO libspdk_accel_iaa.so.3.0 00:02:44.084 LIB libspdk_accel_dsa.a 00:02:44.084 SYMLINK libspdk_accel_error.so 00:02:44.084 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.084 SYMLINK libspdk_accel_ioat.so 00:02:44.084 SO libspdk_accel_dsa.so.5.0 00:02:44.084 LIB libspdk_blob_bdev.a 00:02:44.084 SYMLINK libspdk_accel_iaa.so 00:02:44.084 SO libspdk_blob_bdev.so.11.0 00:02:44.084 SYMLINK libspdk_accel_dsa.so 00:02:44.084 SYMLINK libspdk_blob_bdev.so 00:02:44.343 CC module/bdev/null/bdev_null.o 00:02:44.343 CC module/bdev/null/bdev_null_rpc.o 00:02:44.343 CC module/bdev/malloc/bdev_malloc.o 00:02:44.343 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.343 CC module/bdev/gpt/gpt.o 00:02:44.343 CC module/bdev/ftl/bdev_ftl.o 00:02:44.343 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.343 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.343 CC module/bdev/error/vbdev_error.o 00:02:44.343 CC module/bdev/delay/vbdev_delay.o 00:02:44.343 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.343 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.343 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.343 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.343 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.343 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.343 CC module/bdev/raid/bdev_raid.o 00:02:44.343 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.343 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.343 CC module/bdev/nvme/bdev_nvme.o 00:02:44.343 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:02:44.343 CC module/bdev/split/vbdev_split.o 00:02:44.343 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.343 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.343 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.343 CC module/bdev/raid/raid0.o 00:02:44.343 CC module/bdev/nvme/nvme_rpc.o 00:02:44.343 CC module/bdev/raid/raid1.o 00:02:44.343 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.343 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.343 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.343 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.343 CC module/bdev/raid/concat.o 00:02:44.343 CC module/bdev/nvme/vbdev_opal.o 00:02:44.343 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.343 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.343 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.343 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.343 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.343 CC module/bdev/aio/bdev_aio.o 00:02:44.343 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.343 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.913 LIB libspdk_bdev_split.a 00:02:44.913 LIB libspdk_blobfs_bdev.a 00:02:44.913 SO libspdk_bdev_split.so.6.0 00:02:44.913 LIB libspdk_bdev_ftl.a 00:02:44.913 SO libspdk_blobfs_bdev.so.6.0 00:02:44.913 SO libspdk_bdev_ftl.so.6.0 00:02:44.913 SYMLINK libspdk_bdev_split.so 00:02:44.913 SYMLINK libspdk_blobfs_bdev.so 00:02:44.913 SYMLINK libspdk_bdev_ftl.so 00:02:44.913 LIB libspdk_sock_posix.a 00:02:44.913 LIB libspdk_bdev_passthru.a 00:02:44.913 LIB libspdk_bdev_error.a 00:02:44.913 LIB libspdk_bdev_gpt.a 00:02:44.913 LIB libspdk_bdev_null.a 00:02:44.913 SO libspdk_sock_posix.so.6.0 00:02:44.913 SO libspdk_bdev_passthru.so.6.0 00:02:44.913 SO libspdk_bdev_error.so.6.0 00:02:44.913 SO libspdk_bdev_gpt.so.6.0 00:02:44.913 SO libspdk_bdev_null.so.6.0 00:02:44.913 LIB libspdk_bdev_malloc.a 00:02:44.913 LIB libspdk_bdev_zone_block.a 00:02:45.172 LIB libspdk_bdev_aio.a 00:02:45.172 SO libspdk_bdev_malloc.so.6.0 00:02:45.172 SO libspdk_bdev_zone_block.so.6.0 00:02:45.172 
LIB libspdk_bdev_iscsi.a 00:02:45.172 SYMLINK libspdk_bdev_passthru.so 00:02:45.172 SO libspdk_bdev_aio.so.6.0 00:02:45.172 SO libspdk_bdev_iscsi.so.6.0 00:02:45.172 SYMLINK libspdk_bdev_error.so 00:02:45.172 SYMLINK libspdk_bdev_gpt.so 00:02:45.172 SYMLINK libspdk_bdev_null.so 00:02:45.172 SYMLINK libspdk_sock_posix.so 00:02:45.172 LIB libspdk_bdev_delay.a 00:02:45.172 SYMLINK libspdk_bdev_malloc.so 00:02:45.172 SYMLINK libspdk_bdev_zone_block.so 00:02:45.172 SO libspdk_bdev_delay.so.6.0 00:02:45.172 SYMLINK libspdk_bdev_aio.so 00:02:45.172 SYMLINK libspdk_bdev_iscsi.so 00:02:45.172 SYMLINK libspdk_bdev_delay.so 00:02:45.172 LIB libspdk_bdev_lvol.a 00:02:45.172 LIB libspdk_bdev_virtio.a 00:02:45.432 SO libspdk_bdev_lvol.so.6.0 00:02:45.432 SO libspdk_bdev_virtio.so.6.0 00:02:45.432 SYMLINK libspdk_bdev_lvol.so 00:02:45.432 SYMLINK libspdk_bdev_virtio.so 00:02:46.002 LIB libspdk_bdev_raid.a 00:02:46.002 SO libspdk_bdev_raid.so.6.0 00:02:46.003 SYMLINK libspdk_bdev_raid.so 00:02:47.385 LIB libspdk_bdev_nvme.a 00:02:47.385 SO libspdk_bdev_nvme.so.7.0 00:02:47.644 SYMLINK libspdk_bdev_nvme.so 00:02:47.903 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.903 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.903 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.903 CC module/event/subsystems/vmd/vmd.o 00:02:47.903 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.903 CC module/event/subsystems/keyring/keyring.o 00:02:47.903 CC module/event/subsystems/sock/sock.o 00:02:47.903 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.903 LIB libspdk_event_keyring.a 00:02:47.903 LIB libspdk_event_sock.a 00:02:47.903 LIB libspdk_event_vhost_blk.a 00:02:47.903 LIB libspdk_event_scheduler.a 00:02:47.903 LIB libspdk_event_vmd.a 00:02:48.162 SO libspdk_event_keyring.so.1.0 00:02:48.162 SO libspdk_event_sock.so.5.0 00:02:48.162 LIB libspdk_event_iobuf.a 00:02:48.162 SO libspdk_event_vhost_blk.so.3.0 00:02:48.162 SO libspdk_event_scheduler.so.4.0 00:02:48.162 SO 
libspdk_event_vmd.so.6.0 00:02:48.162 SO libspdk_event_iobuf.so.3.0 00:02:48.162 SYMLINK libspdk_event_keyring.so 00:02:48.162 SYMLINK libspdk_event_sock.so 00:02:48.162 SYMLINK libspdk_event_vhost_blk.so 00:02:48.162 SYMLINK libspdk_event_scheduler.so 00:02:48.162 SYMLINK libspdk_event_vmd.so 00:02:48.162 SYMLINK libspdk_event_iobuf.so 00:02:48.421 CC module/event/subsystems/accel/accel.o 00:02:48.421 LIB libspdk_event_accel.a 00:02:48.421 SO libspdk_event_accel.so.6.0 00:02:48.421 SYMLINK libspdk_event_accel.so 00:02:48.680 CC module/event/subsystems/bdev/bdev.o 00:02:48.938 LIB libspdk_event_bdev.a 00:02:48.938 SO libspdk_event_bdev.so.6.0 00:02:48.938 SYMLINK libspdk_event_bdev.so 00:02:49.196 CC module/event/subsystems/ublk/ublk.o 00:02:49.196 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.196 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.196 CC module/event/subsystems/nbd/nbd.o 00:02:49.196 CC module/event/subsystems/scsi/scsi.o 00:02:49.196 LIB libspdk_event_nbd.a 00:02:49.196 LIB libspdk_event_ublk.a 00:02:49.196 LIB libspdk_event_scsi.a 00:02:49.196 SO libspdk_event_ublk.so.3.0 00:02:49.196 SO libspdk_event_nbd.so.6.0 00:02:49.196 SO libspdk_event_scsi.so.6.0 00:02:49.455 SYMLINK libspdk_event_ublk.so 00:02:49.455 SYMLINK libspdk_event_nbd.so 00:02:49.455 SYMLINK libspdk_event_scsi.so 00:02:49.455 LIB libspdk_event_nvmf.a 00:02:49.455 SO libspdk_event_nvmf.so.6.0 00:02:49.455 SYMLINK libspdk_event_nvmf.so 00:02:49.455 CC module/event/subsystems/iscsi/iscsi.o 00:02:49.455 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:49.713 LIB libspdk_event_vhost_scsi.a 00:02:49.713 SO libspdk_event_vhost_scsi.so.3.0 00:02:49.713 LIB libspdk_event_iscsi.a 00:02:49.713 SO libspdk_event_iscsi.so.6.0 00:02:49.713 SYMLINK libspdk_event_vhost_scsi.so 00:02:49.713 SYMLINK libspdk_event_iscsi.so 00:02:50.078 SO libspdk.so.6.0 00:02:50.078 SYMLINK libspdk.so 00:02:50.078 CXX app/trace/trace.o 00:02:50.078 CC app/trace_record/trace_record.o 00:02:50.078 CC 
app/spdk_nvme_identify/identify.o 00:02:50.078 CC test/rpc_client/rpc_client_test.o 00:02:50.078 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.078 CC app/spdk_top/spdk_top.o 00:02:50.078 CC app/spdk_nvme_perf/perf.o 00:02:50.078 TEST_HEADER include/spdk/accel.h 00:02:50.078 CC app/spdk_lspci/spdk_lspci.o 00:02:50.078 TEST_HEADER include/spdk/accel_module.h 00:02:50.078 TEST_HEADER include/spdk/assert.h 00:02:50.078 TEST_HEADER include/spdk/barrier.h 00:02:50.078 TEST_HEADER include/spdk/base64.h 00:02:50.078 TEST_HEADER include/spdk/bdev.h 00:02:50.078 TEST_HEADER include/spdk/bdev_module.h 00:02:50.078 TEST_HEADER include/spdk/bdev_zone.h 00:02:50.078 TEST_HEADER include/spdk/bit_array.h 00:02:50.078 TEST_HEADER include/spdk/bit_pool.h 00:02:50.078 TEST_HEADER include/spdk/blob_bdev.h 00:02:50.078 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:50.078 TEST_HEADER include/spdk/blobfs.h 00:02:50.078 TEST_HEADER include/spdk/blob.h 00:02:50.078 TEST_HEADER include/spdk/conf.h 00:02:50.078 TEST_HEADER include/spdk/config.h 00:02:50.078 TEST_HEADER include/spdk/cpuset.h 00:02:50.078 TEST_HEADER include/spdk/crc16.h 00:02:50.078 CC app/spdk_dd/spdk_dd.o 00:02:50.078 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.078 CC app/iscsi_tgt/iscsi_tgt.o 00:02:50.078 TEST_HEADER include/spdk/crc32.h 00:02:50.342 TEST_HEADER include/spdk/crc64.h 00:02:50.342 CC app/vhost/vhost.o 00:02:50.342 TEST_HEADER include/spdk/dif.h 00:02:50.342 CC app/nvmf_tgt/nvmf_main.o 00:02:50.342 TEST_HEADER include/spdk/dma.h 00:02:50.342 TEST_HEADER include/spdk/endian.h 00:02:50.342 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.342 TEST_HEADER include/spdk/env.h 00:02:50.342 TEST_HEADER include/spdk/event.h 00:02:50.342 TEST_HEADER include/spdk/fd_group.h 00:02:50.342 TEST_HEADER include/spdk/fd.h 00:02:50.342 TEST_HEADER include/spdk/file.h 00:02:50.342 TEST_HEADER include/spdk/ftl.h 00:02:50.342 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.342 TEST_HEADER include/spdk/hexlify.h 00:02:50.342 
TEST_HEADER include/spdk/histogram_data.h 00:02:50.342 TEST_HEADER include/spdk/idxd.h 00:02:50.343 CC app/spdk_tgt/spdk_tgt.o 00:02:50.343 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.343 TEST_HEADER include/spdk/init.h 00:02:50.343 TEST_HEADER include/spdk/ioat.h 00:02:50.343 CC examples/idxd/perf/perf.o 00:02:50.343 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.343 CC app/fio/nvme/fio_plugin.o 00:02:50.343 CC test/app/histogram_perf/histogram_perf.o 00:02:50.343 CC examples/ioat/perf/perf.o 00:02:50.343 CC test/app/jsoncat/jsoncat.o 00:02:50.343 CC examples/sock/hello_world/hello_sock.o 00:02:50.343 CC examples/nvme/reconnect/reconnect.o 00:02:50.343 CC examples/util/zipf/zipf.o 00:02:50.343 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.343 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:50.343 TEST_HEADER include/spdk/json.h 00:02:50.343 CC examples/nvme/hello_world/hello_world.o 00:02:50.343 CC examples/ioat/verify/verify.o 00:02:50.343 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.343 CC test/nvme/aer/aer.o 00:02:50.343 TEST_HEADER include/spdk/keyring.h 00:02:50.343 CC examples/vmd/led/led.o 00:02:50.343 TEST_HEADER include/spdk/keyring_module.h 00:02:50.343 CC test/app/stub/stub.o 00:02:50.343 CC test/thread/poller_perf/poller_perf.o 00:02:50.343 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.343 CC examples/accel/perf/accel_perf.o 00:02:50.343 CC test/event/event_perf/event_perf.o 00:02:50.343 TEST_HEADER include/spdk/likely.h 00:02:50.343 TEST_HEADER include/spdk/log.h 00:02:50.343 TEST_HEADER include/spdk/lvol.h 00:02:50.343 TEST_HEADER include/spdk/memory.h 00:02:50.343 TEST_HEADER include/spdk/mmio.h 00:02:50.343 TEST_HEADER include/spdk/nbd.h 00:02:50.343 TEST_HEADER include/spdk/notify.h 00:02:50.343 TEST_HEADER include/spdk/nvme.h 00:02:50.343 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.343 CC examples/blob/hello_world/hello_blob.o 00:02:50.343 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.343 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.343 CC 
test/accel/dif/dif.o 00:02:50.343 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.343 CC examples/thread/thread/thread_ex.o 00:02:50.343 CC examples/blob/cli/blobcli.o 00:02:50.343 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.343 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.343 TEST_HEADER include/spdk/nvme_zns.h 00:02:50.343 CC test/blobfs/mkfs/mkfs.o 00:02:50.343 CC app/fio/bdev/fio_plugin.o 00:02:50.343 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.343 CC test/bdev/bdevio/bdevio.o 00:02:50.343 CC test/dma/test_dma/test_dma.o 00:02:50.343 CC examples/nvmf/nvmf/nvmf.o 00:02:50.343 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.343 CC test/app/bdev_svc/bdev_svc.o 00:02:50.343 TEST_HEADER include/spdk/nvmf.h 00:02:50.343 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.343 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.343 TEST_HEADER include/spdk/opal.h 00:02:50.343 TEST_HEADER include/spdk/opal_spec.h 00:02:50.343 TEST_HEADER include/spdk/pci_ids.h 00:02:50.343 TEST_HEADER include/spdk/pipe.h 00:02:50.343 TEST_HEADER include/spdk/queue.h 00:02:50.343 TEST_HEADER include/spdk/reduce.h 00:02:50.343 TEST_HEADER include/spdk/rpc.h 00:02:50.343 TEST_HEADER include/spdk/scheduler.h 00:02:50.343 TEST_HEADER include/spdk/scsi.h 00:02:50.343 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.343 TEST_HEADER include/spdk/sock.h 00:02:50.343 TEST_HEADER include/spdk/stdinc.h 00:02:50.343 TEST_HEADER include/spdk/string.h 00:02:50.343 TEST_HEADER include/spdk/thread.h 00:02:50.343 LINK spdk_lspci 00:02:50.343 TEST_HEADER include/spdk/trace.h 00:02:50.343 TEST_HEADER include/spdk/trace_parser.h 00:02:50.343 TEST_HEADER include/spdk/tree.h 00:02:50.343 CC test/env/mem_callbacks/mem_callbacks.o 00:02:50.343 TEST_HEADER include/spdk/ublk.h 00:02:50.343 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.343 TEST_HEADER include/spdk/util.h 00:02:50.343 TEST_HEADER include/spdk/uuid.h 00:02:50.343 CC test/lvol/esnap/esnap.o 00:02:50.343 TEST_HEADER include/spdk/version.h 
00:02:50.343 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.607 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.607 TEST_HEADER include/spdk/vhost.h 00:02:50.607 TEST_HEADER include/spdk/vmd.h 00:02:50.607 TEST_HEADER include/spdk/xor.h 00:02:50.607 TEST_HEADER include/spdk/zipf.h 00:02:50.607 CXX test/cpp_headers/accel.o 00:02:50.607 LINK rpc_client_test 00:02:50.607 LINK jsoncat 00:02:50.607 LINK interrupt_tgt 00:02:50.607 LINK spdk_nvme_discover 00:02:50.607 LINK histogram_perf 00:02:50.607 LINK nvmf_tgt 00:02:50.607 LINK vhost 00:02:50.607 LINK lsvmd 00:02:50.607 LINK poller_perf 00:02:50.607 LINK led 00:02:50.607 LINK event_perf 00:02:50.607 LINK iscsi_tgt 00:02:50.607 LINK zipf 00:02:50.607 LINK stub 00:02:50.607 LINK spdk_trace_record 00:02:50.607 LINK spdk_tgt 00:02:50.875 LINK bdev_svc 00:02:50.875 LINK verify 00:02:50.875 LINK mkfs 00:02:50.875 LINK hello_world 00:02:50.875 LINK ioat_perf 00:02:50.875 LINK hello_bdev 00:02:50.875 LINK hello_sock 00:02:50.875 LINK hello_blob 00:02:50.875 CXX test/cpp_headers/accel_module.o 00:02:50.875 LINK thread 00:02:50.875 CC test/nvme/reset/reset.o 00:02:50.875 LINK aer 00:02:50.875 LINK spdk_dd 00:02:50.875 CXX test/cpp_headers/assert.o 00:02:50.875 CC test/env/vtophys/vtophys.o 00:02:51.138 CXX test/cpp_headers/barrier.o 00:02:51.138 LINK nvmf 00:02:51.138 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.138 CC test/nvme/sgl/sgl.o 00:02:51.138 LINK idxd_perf 00:02:51.138 CC examples/nvme/arbitration/arbitration.o 00:02:51.138 CC test/event/reactor/reactor.o 00:02:51.138 CC test/event/reactor_perf/reactor_perf.o 00:02:51.138 LINK reconnect 00:02:51.138 CC examples/nvme/hotplug/hotplug.o 00:02:51.138 LINK spdk_trace 00:02:51.138 CXX test/cpp_headers/base64.o 00:02:51.138 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.138 CXX test/cpp_headers/bdev.o 00:02:51.138 CC test/nvme/e2edp/nvme_dp.o 00:02:51.138 CC test/event/app_repeat/app_repeat.o 00:02:51.138 LINK test_dma 00:02:51.138 CC 
test/nvme/overhead/overhead.o 00:02:51.138 CC test/nvme/err_injection/err_injection.o 00:02:51.138 LINK dif 00:02:51.138 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:51.138 LINK bdevio 00:02:51.138 CC test/event/scheduler/scheduler.o 00:02:51.138 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.138 CXX test/cpp_headers/bdev_module.o 00:02:51.402 CC test/nvme/startup/startup.o 00:02:51.402 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.402 LINK vtophys 00:02:51.402 CC test/nvme/reserve/reserve.o 00:02:51.402 CXX test/cpp_headers/bdev_zone.o 00:02:51.402 CC test/env/memory/memory_ut.o 00:02:51.402 LINK reactor 00:02:51.402 LINK reactor_perf 00:02:51.402 LINK nvme_fuzz 00:02:51.402 CC test/nvme/simple_copy/simple_copy.o 00:02:51.402 CC test/nvme/connect_stress/connect_stress.o 00:02:51.402 LINK accel_perf 00:02:51.402 CXX test/cpp_headers/bit_array.o 00:02:51.402 LINK blobcli 00:02:51.402 LINK app_repeat 00:02:51.402 LINK reset 00:02:51.402 CC examples/nvme/abort/abort.o 00:02:51.402 CC test/nvme/boot_partition/boot_partition.o 00:02:51.665 LINK nvme_manage 00:02:51.665 CC test/env/pci/pci_ut.o 00:02:51.665 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.665 CC test/nvme/compliance/nvme_compliance.o 00:02:51.665 CC test/nvme/fused_ordering/fused_ordering.o 00:02:51.665 CXX test/cpp_headers/bit_pool.o 00:02:51.665 CXX test/cpp_headers/blob_bdev.o 00:02:51.665 CC test/nvme/fdp/fdp.o 00:02:51.665 LINK hotplug 00:02:51.665 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:51.665 LINK mem_callbacks 00:02:51.665 LINK cmb_copy 00:02:51.665 LINK env_dpdk_post_init 00:02:51.665 LINK err_injection 00:02:51.665 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.665 LINK startup 00:02:51.665 CXX test/cpp_headers/blobfs.o 00:02:51.665 CXX test/cpp_headers/blob.o 00:02:51.665 CXX test/cpp_headers/conf.o 00:02:51.665 LINK sgl 00:02:51.665 CXX test/cpp_headers/config.o 00:02:51.665 LINK scheduler 00:02:51.665 CC test/nvme/cuse/cuse.o 00:02:51.665 LINK spdk_bdev 
00:02:51.665 CXX test/cpp_headers/cpuset.o 00:02:51.927 CXX test/cpp_headers/crc16.o 00:02:51.927 LINK reserve 00:02:51.927 CXX test/cpp_headers/crc32.o 00:02:51.927 LINK nvme_dp 00:02:51.927 LINK arbitration 00:02:51.927 CXX test/cpp_headers/crc64.o 00:02:51.927 CXX test/cpp_headers/dif.o 00:02:51.927 CXX test/cpp_headers/dma.o 00:02:51.927 CXX test/cpp_headers/endian.o 00:02:51.927 CXX test/cpp_headers/env_dpdk.o 00:02:51.927 LINK overhead 00:02:51.927 LINK connect_stress 00:02:51.927 CXX test/cpp_headers/env.o 00:02:51.927 LINK spdk_nvme 00:02:51.927 LINK boot_partition 00:02:51.927 CXX test/cpp_headers/event.o 00:02:51.927 LINK pmr_persistence 00:02:51.927 CXX test/cpp_headers/fd_group.o 00:02:51.927 CXX test/cpp_headers/fd.o 00:02:51.927 CXX test/cpp_headers/file.o 00:02:51.927 CXX test/cpp_headers/ftl.o 00:02:51.927 CXX test/cpp_headers/gpt_spec.o 00:02:51.927 CXX test/cpp_headers/hexlify.o 00:02:51.927 LINK simple_copy 00:02:51.927 CXX test/cpp_headers/histogram_data.o 00:02:51.927 CXX test/cpp_headers/idxd.o 00:02:51.927 CXX test/cpp_headers/idxd_spec.o 00:02:51.927 LINK fused_ordering 00:02:51.927 CXX test/cpp_headers/init.o 00:02:52.194 CXX test/cpp_headers/ioat.o 00:02:52.194 CXX test/cpp_headers/ioat_spec.o 00:02:52.194 LINK doorbell_aers 00:02:52.194 CXX test/cpp_headers/iscsi_spec.o 00:02:52.194 CXX test/cpp_headers/json.o 00:02:52.194 CXX test/cpp_headers/jsonrpc.o 00:02:52.194 CXX test/cpp_headers/keyring.o 00:02:52.194 CXX test/cpp_headers/keyring_module.o 00:02:52.194 CXX test/cpp_headers/likely.o 00:02:52.194 CXX test/cpp_headers/log.o 00:02:52.194 CXX test/cpp_headers/lvol.o 00:02:52.194 CXX test/cpp_headers/memory.o 00:02:52.194 LINK vhost_fuzz 00:02:52.194 CXX test/cpp_headers/mmio.o 00:02:52.194 LINK spdk_nvme_perf 00:02:52.194 CXX test/cpp_headers/nbd.o 00:02:52.194 CXX test/cpp_headers/notify.o 00:02:52.194 CXX test/cpp_headers/nvme.o 00:02:52.194 CXX test/cpp_headers/nvme_intel.o 00:02:52.194 CXX test/cpp_headers/nvme_ocssd.o 00:02:52.194 
CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:52.194 LINK spdk_nvme_identify 00:02:52.194 CXX test/cpp_headers/nvme_spec.o 00:02:52.194 CXX test/cpp_headers/nvme_zns.o 00:02:52.194 CXX test/cpp_headers/nvmf_cmd.o 00:02:52.194 LINK bdevperf 00:02:52.194 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:52.461 CXX test/cpp_headers/nvmf.o 00:02:52.461 CXX test/cpp_headers/nvmf_spec.o 00:02:52.461 CXX test/cpp_headers/nvmf_transport.o 00:02:52.461 CXX test/cpp_headers/opal.o 00:02:52.461 CXX test/cpp_headers/opal_spec.o 00:02:52.461 CXX test/cpp_headers/pci_ids.o 00:02:52.461 LINK fdp 00:02:52.461 CXX test/cpp_headers/pipe.o 00:02:52.461 CXX test/cpp_headers/queue.o 00:02:52.461 LINK nvme_compliance 00:02:52.461 CXX test/cpp_headers/reduce.o 00:02:52.461 LINK abort 00:02:52.461 CXX test/cpp_headers/rpc.o 00:02:52.461 CXX test/cpp_headers/scheduler.o 00:02:52.461 CXX test/cpp_headers/scsi.o 00:02:52.461 CXX test/cpp_headers/scsi_spec.o 00:02:52.461 CXX test/cpp_headers/sock.o 00:02:52.461 CXX test/cpp_headers/stdinc.o 00:02:52.461 CXX test/cpp_headers/string.o 00:02:52.461 LINK spdk_top 00:02:52.461 CXX test/cpp_headers/thread.o 00:02:52.461 CXX test/cpp_headers/trace.o 00:02:52.461 LINK pci_ut 00:02:52.461 CXX test/cpp_headers/trace_parser.o 00:02:52.461 CXX test/cpp_headers/tree.o 00:02:52.461 CXX test/cpp_headers/ublk.o 00:02:52.461 CXX test/cpp_headers/util.o 00:02:52.461 CXX test/cpp_headers/uuid.o 00:02:52.461 CXX test/cpp_headers/version.o 00:02:52.461 CXX test/cpp_headers/vfio_user_pci.o 00:02:52.461 CXX test/cpp_headers/vfio_user_spec.o 00:02:52.461 CXX test/cpp_headers/vhost.o 00:02:52.461 CXX test/cpp_headers/vmd.o 00:02:52.461 CXX test/cpp_headers/xor.o 00:02:52.461 CXX test/cpp_headers/zipf.o 00:02:53.047 LINK memory_ut 00:02:53.314 LINK cuse 00:02:53.886 LINK iscsi_fuzz 00:02:57.180 LINK esnap 00:02:57.749 00:02:57.749 real 1m14.907s 00:02:57.749 user 11m8.445s 00:02:57.749 sys 2m24.630s 00:02:57.749 14:39:57 -- common/autotest_common.sh@1112 -- $ xtrace_disable 
00:02:57.749 14:39:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.749 ************************************ 00:02:57.749 END TEST make 00:02:57.749 ************************************ 00:02:57.749 14:39:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:57.749 14:39:57 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:57.749 14:39:57 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:57.749 14:39:57 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.749 14:39:57 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:57.749 14:39:57 -- pm/common@45 -- $ pid=10076 00:02:57.749 14:39:57 -- pm/common@52 -- $ sudo kill -TERM 10076 00:02:57.749 14:39:57 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.749 14:39:57 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:57.749 14:39:57 -- pm/common@45 -- $ pid=10078 00:02:57.749 14:39:57 -- pm/common@52 -- $ sudo kill -TERM 10078 00:02:57.749 14:39:57 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.749 14:39:57 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:57.749 14:39:57 -- pm/common@45 -- $ pid=10077 00:02:57.749 14:39:57 -- pm/common@52 -- $ sudo kill -TERM 10077 00:02:57.749 14:39:57 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.749 14:39:57 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:57.749 14:39:57 -- pm/common@45 -- $ pid=10075 00:02:57.749 14:39:57 -- pm/common@52 -- $ sudo kill -TERM 10075 00:02:58.008 14:39:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:58.008 14:39:57 -- nvmf/common.sh@7 -- # uname -s 00:02:58.008 14:39:57 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:02:58.008 14:39:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:58.008 14:39:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:58.008 14:39:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:58.008 14:39:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:58.008 14:39:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:58.008 14:39:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:58.008 14:39:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:58.008 14:39:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:58.008 14:39:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:58.008 14:39:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:02:58.008 14:39:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:02:58.008 14:39:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:58.008 14:39:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:58.008 14:39:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:58.008 14:39:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:58.008 14:39:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:58.008 14:39:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:58.008 14:39:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.008 14:39:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.008 14:39:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.008 14:39:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.008 14:39:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.008 14:39:57 -- paths/export.sh@5 -- # export PATH 00:02:58.008 14:39:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.008 14:39:57 -- nvmf/common.sh@47 -- # : 0 00:02:58.008 14:39:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:58.008 14:39:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:58.008 14:39:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:58.008 14:39:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:58.008 14:39:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:58.008 14:39:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:58.008 14:39:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:58.008 14:39:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:58.008 14:39:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:58.008 14:39:57 -- spdk/autotest.sh@32 -- # uname -s 00:02:58.008 14:39:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:58.008 14:39:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:58.008 14:39:57 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:58.008 14:39:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:58.008 14:39:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:58.008 14:39:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:58.008 14:39:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:58.008 14:39:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:58.008 14:39:57 -- spdk/autotest.sh@48 -- # udevadm_pid=68849 00:02:58.008 14:39:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:58.008 14:39:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:58.008 14:39:57 -- pm/common@17 -- # local monitor 00:02:58.008 14:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.008 14:39:57 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=68854 00:02:58.008 14:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.008 14:39:57 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=68856 00:02:58.008 14:39:57 -- pm/common@21 -- # date +%s 00:02:58.008 14:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.008 14:39:57 -- pm/common@21 -- # date +%s 00:02:58.008 14:39:57 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=68859 00:02:58.008 14:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.008 14:39:57 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=68864 00:02:58.008 14:39:57 -- pm/common@21 -- # date +%s 00:02:58.008 14:39:57 -- pm/common@26 -- # sleep 1 00:02:58.008 14:39:57 -- pm/common@21 -- # date +%s 00:02:58.008 14:39:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135197 00:02:58.008 
14:39:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135197 00:02:58.008 14:39:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135197 00:02:58.008 14:39:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714135197 00:02:58.008 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135197_collect-vmstat.pm.log 00:02:58.008 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135197_collect-bmc-pm.bmc.pm.log 00:02:58.008 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135197_collect-cpu-load.pm.log 00:02:58.008 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714135197_collect-cpu-temp.pm.log 00:02:58.945 14:39:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:58.946 14:39:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:58.946 14:39:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:58.946 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:02:58.946 14:39:58 -- spdk/autotest.sh@59 -- # create_test_list 00:02:58.946 14:39:58 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:58.946 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:02:58.946 14:39:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:58.946 14:39:58 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.946 14:39:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.946 14:39:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:58.946 14:39:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:58.946 14:39:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:58.946 14:39:58 -- common/autotest_common.sh@1441 -- # uname 00:02:58.946 14:39:58 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:58.946 14:39:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:58.946 14:39:58 -- common/autotest_common.sh@1461 -- # uname 00:02:58.946 14:39:58 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:58.946 14:39:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:58.946 14:39:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:58.946 14:39:58 -- spdk/autotest.sh@72 -- # hash lcov 00:02:58.946 14:39:58 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:58.946 14:39:58 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:58.946 --rc lcov_branch_coverage=1 00:02:58.946 --rc lcov_function_coverage=1 00:02:58.946 --rc genhtml_branch_coverage=1 00:02:58.946 --rc genhtml_function_coverage=1 00:02:58.946 --rc genhtml_legend=1 00:02:58.946 --rc geninfo_all_blocks=1 00:02:58.946 ' 00:02:58.946 14:39:58 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:58.946 --rc lcov_branch_coverage=1 00:02:58.946 --rc lcov_function_coverage=1 00:02:58.946 --rc genhtml_branch_coverage=1 00:02:58.946 --rc genhtml_function_coverage=1 00:02:58.946 --rc genhtml_legend=1 00:02:58.946 --rc geninfo_all_blocks=1 00:02:58.946 ' 00:02:58.946 14:39:58 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:58.946 --rc lcov_branch_coverage=1 00:02:58.946 --rc lcov_function_coverage=1 00:02:58.946 --rc genhtml_branch_coverage=1 00:02:58.946 --rc genhtml_function_coverage=1 
00:02:58.946 --rc genhtml_legend=1 00:02:58.946 --rc geninfo_all_blocks=1 00:02:58.946 --no-external' 00:02:58.946 14:39:58 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:58.946 --rc lcov_branch_coverage=1 00:02:58.946 --rc lcov_function_coverage=1 00:02:58.946 --rc genhtml_branch_coverage=1 00:02:58.946 --rc genhtml_function_coverage=1 00:02:58.946 --rc genhtml_legend=1 00:02:58.946 --rc geninfo_all_blocks=1 00:02:58.946 --no-external' 00:02:58.946 14:39:58 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:59.203 lcov: LCOV version 1.14 00:02:59.203 14:39:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:09.172 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:09.172 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:09.172 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:09.173 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:09.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:12.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:12.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:24.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:24.665 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:24.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:24.665 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:24.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:24.665 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:03:32.787 14:40:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:32.787 14:40:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:32.787 14:40:31 -- common/autotest_common.sh@10 -- # set +x
00:03:32.787 14:40:31 -- spdk/autotest.sh@91 -- # rm -f
00:03:32.787 14:40:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:32.787 0000:81:00.0 (8086 0a54): Already using the nvme driver
00:03:32.787 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:32.787 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:32.787 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:32.787 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:32.787 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:32.787 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:32.787 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:32.787 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:32.787 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:32.787 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:32.787 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:32.787 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:32.787 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:32.787 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:32.787 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:32.787 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:32.787 14:40:32 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:32.787 14:40:32 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:32.787 14:40:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:32.787 14:40:32 -- common/autotest_common.sh@1656 -- # local nvme bdf
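The `setup.sh reset` output above reports which kernel driver each PCI function (BDF) is currently bound to. As an illustration only — `list_drivers` and its base-directory parameter are hypothetical, not SPDK code — the same "device -> driver" report can be produced by following each device's `driver` symlink in a sysfs-style tree:

```shell
#!/usr/bin/env bash
# Illustrative sketch, not part of SPDK: print "<BDF> -> <driver>" for each
# device directory under a sysfs-like tree (e.g. /sys/bus/pci/devices).
# Taking the base directory as a parameter is an assumption made here so the
# function can be exercised against a fixture tree.
list_drivers() {
    local base=$1 dev drv
    for dev in "$base"/*/; do
        drv="(none)"
        # A bound device exposes a 'driver' symlink into .../drivers/<name>
        [ -e "${dev}driver" ] && drv=$(basename "$(readlink -f "${dev}driver")")
        printf '%s -> %s\n' "$(basename "$dev")" "$drv"
    done
}
```

Run against `/sys/bus/pci/devices`, this mirrors the `Already using the ioatdma driver` lines above, minus the vendor/device IDs, which the script reads from separate sysfs attributes.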
00:03:32.787 14:40:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:32.787 14:40:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:32.787 14:40:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:32.787 14:40:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:32.787 14:40:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:32.787 14:40:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:32.787 14:40:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:32.787 14:40:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:32.787 14:40:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:32.787 14:40:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:32.787 14:40:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:32.787 No valid GPT data, bailing
00:03:32.787 14:40:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:32.787 14:40:32 -- scripts/common.sh@391 -- # pt=
00:03:32.787 14:40:32 -- scripts/common.sh@392 -- # return 1
00:03:32.787 14:40:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:32.788 1+0 records in
00:03:32.788 1+0 records out
00:03:32.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0021131 s, 496 MB/s
00:03:32.788 14:40:32 -- spdk/autotest.sh@118 -- # sync
00:03:33.046 14:40:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:33.046 14:40:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:33.046 14:40:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:34.946 14:40:34 -- spdk/autotest.sh@124 -- # uname -s
00:03:34.946 14:40:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:34.946 14:40:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:03:34.946 14:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:34.946 14:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:34.946 14:40:34 -- common/autotest_common.sh@10 -- # set +x
00:03:34.946 ************************************
00:03:34.946 START TEST setup.sh
00:03:34.946 ************************************
00:03:34.946 14:40:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:03:34.946 * Looking for test storage...
00:03:34.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:03:34.946 14:40:34 -- setup/test-setup.sh@10 -- # uname -s
00:03:34.946 14:40:34 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:34.946 14:40:34 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:34.946 14:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:34.946 14:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:34.946 14:40:34 -- common/autotest_common.sh@10 -- # set +x
00:03:35.204 ************************************
00:03:35.204 START TEST acl
00:03:35.204 ************************************
00:03:35.204 14:40:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:35.204 * Looking for test storage...
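The trace above shows the zoned-device filter used before cleanup: a block device counts as zoned when `/sys/block/<dev>/queue/zoned` exists and reads something other than `none` (here `[[ none != none ]]` fails, so nvme0n1 is treated as a conventional device and stays eligible for wiping). A minimal standalone sketch of that check — `is_zoned` is a hypothetical helper, and parameterizing the sysfs root is an assumption made here for testability; the traced helper reads `/sys/block` directly:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the zoned check traced above; not SPDK's
# actual is_block_zoned. Returns success (0) only for zoned devices.
is_zoned() {
    local sysfs_root=$1 dev=$2 mode
    # A missing attribute means the kernel does not report a zoned model
    [ -e "$sysfs_root/$dev/queue/zoned" ] || return 1
    read -r mode < "$sysfs_root/$dev/queue/zoned"
    # "none" marks a conventional device; "host-aware" and "host-managed"
    # mark zoned ones
    [ "$mode" != "none" ]
}
```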
00:03:35.204 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.204 14:40:35 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.204 14:40:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:35.204 14:40:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:35.204 14:40:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:35.204 14:40:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:35.204 14:40:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:35.204 14:40:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:35.204 14:40:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.204 14:40:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:35.204 14:40:35 -- setup/acl.sh@12 -- # devs=() 00:03:35.204 14:40:35 -- setup/acl.sh@12 -- # declare -a devs 00:03:35.204 14:40:35 -- setup/acl.sh@13 -- # drivers=() 00:03:35.204 14:40:35 -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.204 14:40:35 -- setup/acl.sh@51 -- # setup reset 00:03:35.204 14:40:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.204 14:40:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.583 14:40:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:36.583 14:40:36 -- setup/acl.sh@16 -- # local dev driver 00:03:36.583 14:40:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:36.583 14:40:36 -- setup/acl.sh@15 -- # setup output status 00:03:36.583 14:40:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.583 14:40:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:37.958 Hugepages 00:03:37.958 node hugesize free / total 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 
-- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 00:03:37.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 
-- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # continue 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@19 -- # [[ 0000:81:00.0 == *:*:*.* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:37.958 14:40:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:03:37.958 14:40:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:37.958 14:40:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:37.958 14:40:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:37.958 14:40:37 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:37.958 14:40:37 -- setup/acl.sh@54 -- # run_test denied denied 00:03:37.958 14:40:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.958 14:40:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.958 14:40:37 -- common/autotest_common.sh@10 -- # set +x 00:03:37.958 ************************************ 00:03:37.958 START TEST denied 00:03:37.958 
************************************ 00:03:37.958 14:40:37 -- common/autotest_common.sh@1111 -- # denied 00:03:37.958 14:40:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:81:00.0' 00:03:37.958 14:40:37 -- setup/acl.sh@38 -- # setup output config 00:03:37.958 14:40:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:81:00.0' 00:03:37.958 14:40:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.958 14:40:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:39.333 0000:81:00.0 (8086 0a54): Skipping denied controller at 0000:81:00.0 00:03:39.333 14:40:39 -- setup/acl.sh@40 -- # verify 0000:81:00.0 00:03:39.333 14:40:39 -- setup/acl.sh@28 -- # local dev driver 00:03:39.333 14:40:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:39.333 14:40:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:81:00.0 ]] 00:03:39.333 14:40:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:81:00.0/driver 00:03:39.333 14:40:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:39.333 14:40:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:39.333 14:40:39 -- setup/acl.sh@41 -- # setup reset 00:03:39.333 14:40:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.333 14:40:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.868 00:03:41.868 real 0m3.802s 00:03:41.868 user 0m1.103s 00:03:41.868 sys 0m1.797s 00:03:41.868 14:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:41.868 14:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:41.868 ************************************ 00:03:41.868 END TEST denied 00:03:41.868 ************************************ 00:03:41.868 14:40:41 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.868 14:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.868 14:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.868 14:40:41 -- 
common/autotest_common.sh@10 -- # set +x 00:03:41.868 ************************************ 00:03:41.868 START TEST allowed 00:03:41.868 ************************************ 00:03:41.868 14:40:41 -- common/autotest_common.sh@1111 -- # allowed 00:03:41.868 14:40:41 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:81:00.0 00:03:41.868 14:40:41 -- setup/acl.sh@45 -- # setup output config 00:03:41.868 14:40:41 -- setup/acl.sh@46 -- # grep -E '0000:81:00.0 .*: nvme -> .*' 00:03:41.868 14:40:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.868 14:40:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:45.159 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.159 14:40:45 -- setup/acl.sh@47 -- # verify 00:03:45.159 14:40:45 -- setup/acl.sh@28 -- # local dev driver 00:03:45.159 14:40:45 -- setup/acl.sh@48 -- # setup reset 00:03:45.159 14:40:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.159 14:40:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.066 00:03:47.066 real 0m4.867s 00:03:47.066 user 0m1.124s 00:03:47.066 sys 0m1.755s 00:03:47.066 14:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:47.066 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:47.066 ************************************ 00:03:47.066 END TEST allowed 00:03:47.066 ************************************ 00:03:47.066 00:03:47.066 real 0m11.721s 00:03:47.066 user 0m3.442s 00:03:47.066 sys 0m5.446s 00:03:47.066 14:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:47.066 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:47.066 ************************************ 00:03:47.066 END TEST acl 00:03:47.066 ************************************ 00:03:47.066 14:40:46 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:47.066 14:40:46 -- common/autotest_common.sh@1087 -- 
# '[' 2 -le 1 ']' 00:03:47.066 14:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.066 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:47.066 ************************************ 00:03:47.066 START TEST hugepages 00:03:47.066 ************************************ 00:03:47.066 14:40:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:47.066 * Looking for test storage... 00:03:47.066 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:47.066 14:40:46 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:47.067 14:40:46 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:47.067 14:40:46 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:47.067 14:40:46 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:47.067 14:40:46 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:47.067 14:40:46 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:47.067 14:40:46 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:47.067 14:40:46 -- setup/common.sh@18 -- # local node= 00:03:47.067 14:40:46 -- setup/common.sh@19 -- # local var val 00:03:47.067 14:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.067 14:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.067 14:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.067 14:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.067 14:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.067 14:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44539180 kB' 'MemAvailable: 48072640 kB' 'Buffers: 7732 kB' 'Cached: 9112912 kB' 'SwapCached: 0 kB' 'Active: 6469660 kB' 'Inactive: 
3404216 kB' 'Active(anon): 5925632 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 756984 kB' 'Mapped: 143736 kB' 'Shmem: 5172400 kB' 'KReclaimable: 157680 kB' 'Slab: 444016 kB' 'SReclaimable: 157680 kB' 'SUnreclaim: 286336 kB' 'KernelStack: 12960 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 7543360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193720 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- 
setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.067 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.067 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 
14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # continue 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.068 14:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.068 14:40:46 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:47.068 14:40:46 -- setup/common.sh@33 -- # echo 2048 00:03:47.068 14:40:46 -- setup/common.sh@33 -- # return 0 00:03:47.068 14:40:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:47.068 14:40:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:47.068 
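The long scan above is setup/common.sh walking /proc/meminfo one field at a time until it reaches Hugepagesize, then echoing the value (2048 kB on this node). A minimal standalone sketch of that lookup; the optional file argument is added here purely for illustration (the real helper reads a `mem_f` variable set elsewhere in common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop from setup/common.sh: split each
# "Key: value kB" line on ': ' and print the value of one key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field shows up as one "continue" in the trace
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo Hugepagesize   # prints 2048 on a 2 MiB default-hugepage system
```

The backslash-heavy patterns in the trace (`\H\u\g\e\p\a\g\e\s\i\z\e`) are just how bash xtrace quotes the literal comparison string in `[[ $var == "$get" ]]`.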
14:40:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:47.068 14:40:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:47.068 14:40:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:47.068 14:40:46 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:47.068 14:40:46 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:47.068 14:40:46 -- setup/hugepages.sh@207 -- # get_nodes 00:03:47.068 14:40:46 -- setup/hugepages.sh@27 -- # local node 00:03:47.068 14:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.068 14:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:47.068 14:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.068 14:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:47.068 14:40:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.068 14:40:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.068 14:40:46 -- setup/hugepages.sh@208 -- # clear_hp 00:03:47.068 14:40:46 -- setup/hugepages.sh@37 -- # local node hp 00:03:47.068 14:40:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.068 14:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.068 14:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.068 14:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.068 14:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.068 14:40:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.068 14:40:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.068 14:40:47 -- setup/hugepages.sh@41 -- # 
echo 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.068 14:40:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:47.068 14:40:47 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:47.068 14:40:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.068 14:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.068 14:40:47 -- common/autotest_common.sh@10 -- # set +x 00:03:47.068 ************************************ 00:03:47.068 START TEST default_setup 00:03:47.068 ************************************ 00:03:47.068 14:40:47 -- common/autotest_common.sh@1111 -- # default_setup 00:03:47.068 14:40:47 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.068 14:40:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.068 14:40:47 -- setup/hugepages.sh@51 -- # shift 00:03:47.068 14:40:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.068 14:40:47 -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.068 14:40:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.068 14:40:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.068 14:40:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.068 14:40:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.068 14:40:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.068 14:40:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.068 14:40:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.068 14:40:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.068 14:40:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.068 14:40:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.068 14:40:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.068 14:40:47 -- 
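The clear_hp step traced above writes 0 to every per-size nr_hugepages knob on every NUMA node before the test allocates its own pages (two nodes, two sizes each here, hence the four `echo 0` entries). A sketch of that double loop; the sysfs root is made a parameter only so the sketch is testable — the real script writes to /sys directly and needs root:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp from setup/hugepages.sh: for each NUMA node,
# zero every hugepage size (e.g. hugepages-2048kB, hugepages-1048576kB).
clear_hp() {
    local sys=${1:-/sys/devices/system/node} node hp
    for node in "$sys"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

# Usage (as root): clear_hp   # then export CLEAR_HUGE=yes, as the trace does
```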
setup/hugepages.sh@73 -- # return 0 00:03:47.068 14:40:47 -- setup/hugepages.sh@137 -- # setup output 00:03:47.068 14:40:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.068 14:40:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:48.448 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:48.448 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:48.448 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.369 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.369 14:40:50 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:50.369 14:40:50 -- setup/hugepages.sh@89 -- # local node 00:03:50.369 14:40:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.369 14:40:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.369 14:40:50 -- setup/hugepages.sh@92 -- # local surp 00:03:50.369 14:40:50 -- setup/hugepages.sh@93 -- # local resv 00:03:50.369 14:40:50 -- setup/hugepages.sh@94 -- # local anon 00:03:50.369 14:40:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.369 14:40:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.369 14:40:50 -- setup/common.sh@17 -- # local get=AnonHugePages 
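default_setup begins by calling `get_test_nr_hugepages 2097152 0`, i.e. reserve 2 GiB worth of default-size pages on node 0. With the 2048 kB page size probed earlier that works out to 1024 pages, matching `nr_hugepages=1024` and `nodes_test[0]=1024` in the trace. A simplified sketch of that arithmetic (variable names follow the trace; the real function also handles the case where no explicit nodes are passed):

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages: turn a requested size in kB into a
# page count and assign that count to each requested NUMA node.
get_test_nr_hugepages() {
    local size=$1; shift                 # requested size in kB
    local default_hugepages=2048         # kB, from the Hugepagesize probe
    (( size >= default_hugepages )) || return 1
    local nr_hugepages=$(( size / default_hugepages ))
    declare -A nodes_test=()             # node id -> page count
    local node
    for node in "$@"; do                 # explicit node ids, e.g. 0
        nodes_test[$node]=$nr_hugepages
    done
    echo "$nr_hugepages"
}

get_test_nr_hugepages 2097152 0          # -> 1024 (2 GiB of 2 MiB pages)
```

The verify_nr_hugepages step that follows then re-reads /proc/meminfo (the AnonHugePages and HugePages_Surp scans below) to confirm the kernel actually granted what was requested.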
00:03:50.369 14:40:50 -- setup/common.sh@18 -- # local node= 00:03:50.369 14:40:50 -- setup/common.sh@19 -- # local var val 00:03:50.369 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.369 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.369 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.369 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.369 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.369 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46590992 kB' 'MemAvailable: 50124444 kB' 'Buffers: 7732 kB' 'Cached: 9113008 kB' 'SwapCached: 0 kB' 'Active: 6494252 kB' 'Inactive: 3404216 kB' 'Active(anon): 5950224 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781132 kB' 'Mapped: 144092 kB' 'Shmem: 5172496 kB' 'KReclaimable: 157664 kB' 'Slab: 443292 kB' 'SReclaimable: 157664 kB' 'SUnreclaim: 285628 kB' 'KernelStack: 13104 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193768 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 
'DirectMap1G: 57671680 kB' 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.369 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.369 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ SwapTotal 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- 
setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 
14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.370 14:40:50 -- setup/common.sh@33 -- # echo 0 00:03:50.370 14:40:50 -- setup/common.sh@33 -- # return 0 00:03:50.370 14:40:50 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.370 14:40:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.370 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.370 14:40:50 -- setup/common.sh@18 -- # local node= 00:03:50.370 14:40:50 -- setup/common.sh@19 -- # local var val 00:03:50.370 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.370 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.370 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.370 14:40:50 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.370 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.370 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46593324 kB' 'MemAvailable: 50126776 kB' 'Buffers: 7732 kB' 'Cached: 9113012 kB' 'SwapCached: 0 kB' 'Active: 6492848 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948820 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779760 kB' 'Mapped: 143960 kB' 'Shmem: 5172500 kB' 'KReclaimable: 157664 kB' 'Slab: 443324 kB' 'SReclaimable: 157664 kB' 'SUnreclaim: 285660 kB' 'KernelStack: 12768 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193672 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.370 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.370 14:40:50 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... repetitive xtrace elided: setup/common.sh@31-32 compares each remaining /proc/meminfo key (MemAvailable through HugePages_Rsvd) against HugePages_Surp and skips it with 'continue' ...]
00:03:50.371 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.371 14:40:50 -- setup/common.sh@33 -- # echo 0
00:03:50.371 14:40:50 -- setup/common.sh@33 -- # return 0
00:03:50.371 14:40:50 -- setup/hugepages.sh@99 -- # surp=0
00:03:50.371 14:40:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:50.371 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.371 14:40:50 -- setup/common.sh@18 -- # local node=
00:03:50.371 14:40:50 -- setup/common.sh@19 -- # local var val
00:03:50.371 14:40:50 -- setup/common.sh@20 -- # local mem_f mem
00:03:50.371 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.371 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.371 14:40:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.371 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.371 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.371 14:40:50 -- setup/common.sh@31 -- # IFS=': '
00:03:50.371 14:40:50 -- setup/common.sh@31 -- # read -r var val _
00:03:50.371 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46593256 kB' 'MemAvailable: 50126708 kB' 'Buffers: 7732 kB' 'Cached: 9113024 kB' 'SwapCached: 0 kB' 'Active: 6492056 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948028 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 778856 kB' 'Mapped: 143820 kB' 'Shmem: 5172512 kB' 'KReclaimable: 157664 kB' 'Slab: 443388 kB' 'SReclaimable: 157664 kB' 'SUnreclaim: 285724 kB' 'KernelStack: 12800 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
[... repetitive xtrace elided: each key (MemTotal through HugePages_Free) is compared against HugePages_Rsvd and skipped with 'continue' ...]
00:03:50.633 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.633 14:40:50 -- setup/common.sh@33 -- # echo 0
00:03:50.633 14:40:50 -- setup/common.sh@33 -- # return 0
00:03:50.633 14:40:50 -- setup/hugepages.sh@100 -- # resv=0
00:03:50.633 14:40:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.633 nr_hugepages=1024
00:03:50.633 14:40:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.633 resv_hugepages=0
00:03:50.633 14:40:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.633 surplus_hugepages=0
00:03:50.633 14:40:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.633 anon_hugepages=0
00:03:50.633 14:40:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.633 14:40:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:50.633 14:40:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace elided: same get_meminfo setup as above (locals, mem_f=/proc/meminfo, mapfile), this time for get=HugePages_Total ...]
00:03:50.634 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46592276 kB' 'MemAvailable: 50125728 kB' 'Buffers: 7732 kB' 'Cached: 9113040 kB' 'SwapCached: 0 kB' 'Active: 6492292 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948264 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779164 kB' 'Mapped: 143820 kB' 'Shmem: 5172528 kB' 'KReclaimable: 157664 kB' 'Slab: 443420 kB' 'SReclaimable: 157664 kB' 'SUnreclaim: 285756 kB' 'KernelStack: 12800 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
[... repetitive xtrace elided: per-key comparison against HugePages_Total in progress at the end of this chunk ...]
14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- 
setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- 
setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.635 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.635 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.635 14:40:50 -- setup/common.sh@33 -- # echo 1024 00:03:50.635 14:40:50 -- setup/common.sh@33 -- # return 0 00:03:50.635 14:40:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.635 14:40:50 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.635 14:40:50 -- setup/hugepages.sh@27 -- # local node 00:03:50.635 14:40:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.635 14:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.635 14:40:50 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:50.635 14:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:50.635 14:40:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.635 14:40:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.635 14:40:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.635 14:40:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.635 14:40:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.636 14:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.636 14:40:50 -- setup/common.sh@18 -- # local node=0 00:03:50.636 14:40:50 -- setup/common.sh@19 -- # local var val 00:03:50.636 14:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.636 14:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.636 14:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.636 14:40:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.636 14:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.636 14:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.636 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.636 14:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.636 14:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21758368 kB' 'MemUsed: 11071516 kB' 'SwapCached: 0 kB' 'Active: 4733884 kB' 'Inactive: 3249216 kB' 'Active(anon): 4564152 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534084 kB' 'Mapped: 98520 kB' 'AnonPages: 452228 kB' 'Shmem: 4115136 kB' 'KernelStack: 6776 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100860 kB' 'Slab: 273272 kB' 'SReclaimable: 100860 kB' 'SUnreclaim: 172412 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.636 14:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.636 14:40:50 -- setup/common.sh@32 -- # continue 00:03:50.636 14:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.636 14:40:50 -- setup/common.sh@31 -- # read -r var val _ [...identical per-key scan xtrace repeated for each remaining node0 meminfo key, MemFree through HugePages_Free...] 00:03:50.637 14:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.637 14:40:50 -- setup/common.sh@33 -- # echo 0 00:03:50.637 14:40:50 -- setup/common.sh@33 -- # return 0 00:03:50.637 14:40:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.637 14:40:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.637 14:40:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.637 14:40:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.637 14:40:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:03:50.637 14:40:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.637 00:03:50.637 real 0m3.396s 00:03:50.637 user 0m0.622s 00:03:50.637 sys 0m0.841s 00:03:50.637 14:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.637 14:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:50.637 ************************************ 00:03:50.637 END TEST default_setup 00:03:50.637 ************************************ 00:03:50.637 14:40:50 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:50.637 14:40:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.637 14:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.637 14:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:50.637 ************************************ 00:03:50.637 START TEST per_node_1G_alloc 00:03:50.637 ************************************ 00:03:50.637 14:40:50 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
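The scan traced above is setup/common.sh's get_meminfo pattern: split each "key: value" line with IFS=': ', hit 'continue' on every non-matching key, and echo the value once the requested key is reached. A minimal self-contained sketch of the same idea (the function name and sample file below are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan: walk "key: value [unit]" lines,
# skipping keys until the requested one matches, then print its value.
get_meminfo_value() {
  local get=$1 file=${2:-/proc/meminfo} var val _
  while IFS=': ' read -r var val _; do
    # Non-matching keys just 'continue' -- exactly what the xtrace repeats.
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < "$file"
  return 1
}

# Exercise it against a fixed snippet so the sketch works on any host:
printf '%s\n' 'MemTotal: 60541708 kB' 'HugePages_Total: 1024' \
  > /tmp/gr_meminfo.sample
get_meminfo_value HugePages_Total /tmp/gr_meminfo.sample   # prints 1024
```

Setting IFS to ': ' makes read split on both the colon and the space, so the unit suffix ("kB") lands in the throwaway `_` variable.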
14:40:50 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:50.637 14:40:50 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:50.637 14:40:50 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.637 14:40:50 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:50.637 14:40:50 -- setup/hugepages.sh@51 -- # shift 00:03:50.637 14:40:50 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:50.637 14:40:50 -- setup/hugepages.sh@52 -- # local node_ids 00:03:50.637 14:40:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.637 14:40:50 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.637 14:40:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:50.637 14:40:50 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:50.637 14:40:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.637 14:40:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.637 14:40:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.637 14:40:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.637 14:40:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.637 14:40:50 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:50.637 14:40:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.637 14:40:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.638 14:40:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.638 14:40:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.638 14:40:50 -- setup/hugepages.sh@73 -- # return 0 00:03:50.638 14:40:50 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:50.638 14:40:50 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:50.638 14:40:50 -- setup/hugepages.sh@146 -- # setup output 00:03:50.638 14:40:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.638 14:40:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:52.016 0000:00:04.7 
(8086 0e27): Already using the vfio-pci driver 00:03:52.016 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.016 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:52.016 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:52.016 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:52.016 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:52.016 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:52.016 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:52.016 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:52.016 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:52.016 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:52.016 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:52.016 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:52.016 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:52.016 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:52.016 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:52.016 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:52.016 14:40:51 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:52.016 14:40:51 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.016 14:40:51 -- setup/hugepages.sh@89 -- # local node 00:03:52.016 14:40:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.016 14:40:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.016 14:40:51 -- setup/hugepages.sh@92 -- # local surp 00:03:52.016 14:40:51 -- setup/hugepages.sh@93 -- # local resv 00:03:52.016 14:40:51 -- setup/hugepages.sh@94 -- # local anon 00:03:52.016 14:40:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.016 14:40:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.016 14:40:51 -- setup/common.sh@17 -- # local get=AnonHugePages 
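The "Already using the vfio-pci driver" lines above reflect a per-device check of which kernel driver each PCI function is bound to; in sysfs that is just a symlink read. A small sketch (function name is illustrative; the sysfs layout is the standard Linux one):

```shell
#!/usr/bin/env bash
# Report the kernel driver bound to a PCI device (by domain:bus:dev.fn),
# using the /sys/bus/pci/devices/<bdf>/driver symlink.
bound_driver() {
  local bdf=$1 link=/sys/bus/pci/devices/$1/driver
  if [[ -e $link ]]; then
    basename "$(readlink -f "$link")"   # e.g. vfio-pci, nvme, ioatdma
  else
    echo none                           # device absent or driverless
  fi
}

bound_driver ffff:00:00.0   # nonexistent BDF -> prints "none"
```

On the test node above, `bound_driver 0000:81:00.0` would presumably report vfio-pci, matching the log.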
00:03:52.016 14:40:51 -- setup/common.sh@18 -- # local node= 00:03:52.016 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.016 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.016 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.016 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.016 14:40:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.016 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.016 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46591216 kB' 'MemAvailable: 50124632 kB' 'Buffers: 7732 kB' 'Cached: 9113092 kB' 'SwapCached: 0 kB' 'Active: 6492968 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948940 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779668 kB' 'Mapped: 143916 kB' 'Shmem: 5172580 kB' 'KReclaimable: 157592 kB' 'Slab: 443472 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 285880 kB' 'KernelStack: 12816 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 
'DirectMap1G: 57671680 kB' 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ [...identical per-key scan xtrace repeated for MemFree through AnonPages...] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 --
setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.016 
14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.016 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.016 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.017 14:40:51 -- setup/common.sh@33 -- # echo 0 00:03:52.017 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.017 14:40:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.017 14:40:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.017 14:40:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.017 14:40:51 -- setup/common.sh@18 -- # local node= 00:03:52.017 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.017 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.017 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.017 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.017 14:40:51 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.017 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.017 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46592656 kB' 'MemAvailable: 50126072 kB' 'Buffers: 7732 kB' 'Cached: 9113096 kB' 'SwapCached: 0 kB' 'Active: 6493220 kB' 'Inactive: 3404216 kB' 'Active(anon): 5949192 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779992 kB' 'Mapped: 143992 kB' 'Shmem: 5172584 kB' 'KReclaimable: 157592 kB' 'Slab: 443540 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 285948 kB' 'KernelStack: 12800 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193672 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 
14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.017 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.017 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.018 14:40:51 -- setup/common.sh@33 -- # echo 0 00:03:52.018 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.018 14:40:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.018 14:40:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.018 14:40:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.018 14:40:51 -- setup/common.sh@18 -- # local node= 00:03:52.018 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.018 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.018 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.018 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.018 14:40:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.018 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.018 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46593584 kB' 'MemAvailable: 50127000 kB' 'Buffers: 7732 kB' 'Cached: 9113104 kB' 'SwapCached: 0 kB' 'Active: 6492980 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948952 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779644 kB' 'Mapped: 143852 kB' 'Shmem: 5172592 kB' 'KReclaimable: 157592 kB' 'Slab: 443532 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 285940 kB' 'KernelStack: 12800 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563808 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 193656 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # 
continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.018 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.018 14:40:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 
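The trace above is the xtrace of `setup/common.sh`'s `get_meminfo` helper scanning every `/proc/meminfo` key (Mlocked, SwapTotal, ..., HugePages_Rsvd) and `continue`-ing until the requested field matches. A minimal sketch of that pattern, assuming a simplified body (the real SPDK helper also strips a `Node N ` prefix via `mapfile`; only `/proc/meminfo` and the per-node `meminfo` paths are taken from the log, the reduced loop here is an illustration):

```shell
# Sketch of the IFS=': ' read loop visible in the trace: split each
# "Key:   value kB" line of meminfo, compare the key against the field
# we were asked for, and skip (continue) everything else.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # For a per-node query, prefer that node's meminfo, as the trace
    # does later for node0/node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Rsvd   # prints the reserved-hugepage count, e.g. 0
```

This is why the log shows one `[[ Key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` test followed by `continue` per meminfo line: the loop walks the whole file until it reaches the matching key, then echoes its value and returns.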
00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.019 14:40:51 -- setup/common.sh@33 -- # echo 0 00:03:52.019 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.019 14:40:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.019 14:40:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.019 nr_hugepages=1024 00:03:52.019 14:40:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.019 resv_hugepages=0 00:03:52.019 14:40:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.019 surplus_hugepages=0 00:03:52.019 14:40:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.019 anon_hugepages=0 00:03:52.019 14:40:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.019 14:40:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.019 14:40:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.019 14:40:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.019 14:40:51 -- setup/common.sh@18 -- # local node= 00:03:52.019 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.019 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.019 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.019 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.019 14:40:51 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.019 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.019 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46593740 kB' 'MemAvailable: 50127156 kB' 'Buffers: 7732 kB' 'Cached: 9113120 kB' 'SwapCached: 0 kB' 'Active: 6492700 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948672 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779400 kB' 'Mapped: 143852 kB' 'Shmem: 5172608 kB' 'KReclaimable: 157592 kB' 'Slab: 443532 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 285940 kB' 'KernelStack: 12816 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7563820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193672 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.019 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.019 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 
14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.020 14:40:51 -- setup/common.sh@33 -- # echo 1024 00:03:52.020 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.020 14:40:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.020 14:40:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.020 14:40:51 -- setup/hugepages.sh@27 -- # local node 00:03:52.020 14:40:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.020 14:40:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.020 14:40:51 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:52.020 14:40:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.020 14:40:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.020 14:40:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.020 14:40:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.020 14:40:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.020 14:40:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.020 14:40:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.020 14:40:51 -- setup/common.sh@18 -- # local node=0 00:03:52.020 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.020 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.020 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.020 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.020 14:40:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.020 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.020 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22820192 kB' 'MemUsed: 10009692 kB' 'SwapCached: 0 kB' 'Active: 4736076 kB' 'Inactive: 3249216 kB' 'Active(anon): 4566344 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534148 kB' 'Mapped: 98552 kB' 'AnonPages: 454344 kB' 'Shmem: 4115200 kB' 'KernelStack: 6840 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100892 kB' 'Slab: 273556 kB' 'SReclaimable: 100892 kB' 'SUnreclaim: 172664 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # 
continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.020 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.020 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.020 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@33 -- # echo 0 00:03:52.021 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.021 14:40:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.021 14:40:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.021 14:40:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.021 14:40:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.021 14:40:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.021 14:40:51 -- setup/common.sh@18 -- # local node=1 00:03:52.021 14:40:51 -- setup/common.sh@19 -- # local var val 00:03:52.021 14:40:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.021 14:40:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.021 14:40:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.021 14:40:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.021 14:40:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.021 14:40:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 23774008 kB' 'MemUsed: 3937816 kB' 'SwapCached: 0 kB' 'Active: 1756656 kB' 'Inactive: 155000 kB' 'Active(anon): 1382360 kB' 'Inactive(anon): 0 kB' 'Active(file): 374296 kB' 'Inactive(file): 155000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1586720 kB' 'Mapped: 45300 kB' 'AnonPages: 325060 kB' 'Shmem: 1057424 kB' 'KernelStack: 5976 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56700 kB' 'Slab: 169936 kB' 'SReclaimable: 56700 kB' 'SUnreclaim: 113236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- 
setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # continue 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.021 14:40:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.021 14:40:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.021 14:40:51 -- setup/common.sh@33 -- # echo 0 00:03:52.022 14:40:51 -- setup/common.sh@33 -- # return 0 00:03:52.022 14:40:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.022 14:40:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.022 14:40:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.022 14:40:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.022 14:40:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.022 node0=512 expecting 512 00:03:52.022 14:40:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.022 14:40:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.022 14:40:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.022 14:40:51 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.022 node1=512 expecting 512 00:03:52.022 14:40:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.022 00:03:52.022 real 0m1.367s 00:03:52.022 user 0m0.560s 00:03:52.022 sys 0m0.764s 00:03:52.022 14:40:51 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:03:52.022 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:03:52.022 ************************************ 00:03:52.022 END TEST per_node_1G_alloc 00:03:52.022 ************************************ 00:03:52.022 14:40:52 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.022 14:40:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.022 14:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.022 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:03:52.280 ************************************ 00:03:52.280 START TEST even_2G_alloc 00:03:52.280 ************************************ 00:03:52.280 14:40:52 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:52.280 14:40:52 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.280 14:40:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.280 14:40:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.280 14:40:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.280 14:40:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.280 14:40:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.280 14:40:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.280 14:40:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.280 14:40:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.280 14:40:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.280 14:40:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.280 14:40:52 -- setup/hugepages.sh@83 -- # : 512 00:03:52.280 14:40:52 -- 
setup/hugepages.sh@84 -- # : 1 00:03:52.280 14:40:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.280 14:40:52 -- setup/hugepages.sh@83 -- # : 0 00:03:52.280 14:40:52 -- setup/hugepages.sh@84 -- # : 0 00:03:52.280 14:40:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.280 14:40:52 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.280 14:40:52 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.280 14:40:52 -- setup/hugepages.sh@153 -- # setup output 00:03:52.280 14:40:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.280 14:40:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:53.215 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:53.215 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:53.215 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:53.215 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:53.215 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:53.215 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:53.215 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:53.215 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:53.215 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:53.215 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:53.215 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:53.215 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:53.215 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:53.215 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:53.474 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:53.474 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:53.474 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 
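The xtrace above repeats one parsing pattern for every /proc/meminfo field: `get_meminfo` splits each `Key: value` line on `': '`, keeps scanning (the `continue` branches in the log) until the key matches the requested field, then echoes the value and returns. A simplified, illustrative reconstruction of that loop (not the exact setup/common.sh source) is:

```shell
# Illustrative sketch of the get_meminfo loop seen in the xtrace: split each
# "Key: value" line on ': ', skip non-matching keys ("continue" in the log),
# then print the value of the requested key and stop.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # key mismatch: next line
        echo "$val"                        # key match: print value
        return 0
    done < "$file"
    return 1                               # key not present
}

# Deterministic demo against a sample file instead of the live /proc/meminfo.
sample=$(mktemp)
printf 'MemTotal: 60541708 kB\nHugePages_Surp: 0\n' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints: 0
rm -f "$sample"
```

In the real script, when a node number is given, `mem_f` is switched to `/sys/devices/system/node/node<N>/meminfo` (visible in the log as the `[[ -e /sys/devices/system/node/node1/meminfo ]]` check) and the `Node <N> ` prefix is stripped before parsing.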
00:03:53.474 14:40:53 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:53.474 14:40:53 -- setup/hugepages.sh@89 -- # local node 00:03:53.474 14:40:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.474 14:40:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.474 14:40:53 -- setup/hugepages.sh@92 -- # local surp 00:03:53.474 14:40:53 -- setup/hugepages.sh@93 -- # local resv 00:03:53.474 14:40:53 -- setup/hugepages.sh@94 -- # local anon 00:03:53.475 14:40:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.475 14:40:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.475 14:40:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.475 14:40:53 -- setup/common.sh@18 -- # local node= 00:03:53.475 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.475 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.475 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.475 14:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.475 14:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.475 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.475 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46575064 kB' 'MemAvailable: 50108480 kB' 'Buffers: 7732 kB' 'Cached: 9113184 kB' 'SwapCached: 0 kB' 'Active: 6495488 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951460 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 782016 kB' 'Mapped: 143904 kB' 'Shmem: 5172672 kB' 'KReclaimable: 157592 kB' 'Slab: 443760 kB' 
'SReclaimable: 157592 kB' 'SUnreclaim: 286168 kB' 'KernelStack: 12848 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7564004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193736 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 
14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- 
# [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:53.475 14:40:53 -- setup/common.sh@33 -- # echo 0 00:03:53.475 14:40:53 -- setup/common.sh@33 -- # return 0 00:03:53.475 14:40:53 -- setup/hugepages.sh@97 -- # anon=0 00:03:53.475 14:40:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.475 14:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.475 14:40:53 -- setup/common.sh@18 -- # local node= 00:03:53.475 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.475 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.475 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.475 14:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.475 14:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.475 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.475 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46575696 kB' 'MemAvailable: 50109112 kB' 'Buffers: 7732 kB' 'Cached: 9113184 kB' 'SwapCached: 0 kB' 'Active: 6496128 kB' 'Inactive: 3404216 kB' 'Active(anon): 5952100 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 782672 kB' 'Mapped: 143980 kB' 'Shmem: 5172672 kB' 'KReclaimable: 157592 kB' 'Slab: 443856 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 286264 kB' 'KernelStack: 12816 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7564016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193688 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.475 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 14:40:53 -- setup/common.sh@31 -- # read -r var val _ [identical per-key xtrace repeated for MemFree through HugePages_Rsvd] 00:03:53.476 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 14:40:53 -- setup/common.sh@33 -- # echo 0 00:03:53.476 14:40:53 -- setup/common.sh@33 -- # return 0 00:03:53.476 14:40:53 -- setup/hugepages.sh@99 -- # surp=0 00:03:53.476 14:40:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.476 14:40:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.476 14:40:53 -- setup/common.sh@18 -- # local node= 00:03:53.476 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.476 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.476 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.476 14:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.476 14:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.476 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.476 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.476 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 
46575696 kB' 'MemAvailable: 50109112 kB' 'Buffers: 7732 kB' 'Cached: 9113196 kB' 'SwapCached: 0 kB' 'Active: 6494844 kB' 'Inactive: 3404216 kB' 'Active(anon): 5950816 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781404 kB' 'Mapped: 143960 kB' 'Shmem: 5172684 kB' 'KReclaimable: 157592 kB' 'Slab: 443856 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 286264 kB' 'KernelStack: 12816 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7564032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193688 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:53.476 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.476 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.476 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 14:40:53 -- setup/common.sh@31 -- # read -r var val _ [identical per-key xtrace repeated for MemFree through HugePages_Free] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.477 14:40:53 -- setup/common.sh@33 -- # echo 0 00:03:53.477 14:40:53 -- setup/common.sh@33 -- # return 0 00:03:53.477 14:40:53 -- setup/hugepages.sh@100 -- # resv=0 00:03:53.477 14:40:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.477 nr_hugepages=1024 00:03:53.477 14:40:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.477 resv_hugepages=0 00:03:53.477 14:40:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.477 surplus_hugepages=0 00:03:53.477 14:40:53 -- setup/hugepages.sh@105 -- # 
echo anon_hugepages=0 00:03:53.477 anon_hugepages=0 00:03:53.477 14:40:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.477 14:40:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.477 14:40:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.477 14:40:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.477 14:40:53 -- setup/common.sh@18 -- # local node= 00:03:53.477 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.477 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.477 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.477 14:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.477 14:40:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.477 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.477 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46576732 kB' 'MemAvailable: 50110148 kB' 'Buffers: 7732 kB' 'Cached: 9113212 kB' 'SwapCached: 0 kB' 'Active: 6495092 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951064 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781580 kB' 'Mapped: 143884 kB' 'Shmem: 5172700 kB' 'KReclaimable: 157592 kB' 'Slab: 443828 kB' 'SReclaimable: 157592 kB' 'SUnreclaim: 286236 kB' 'KernelStack: 12816 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7564048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193688 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.477 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.477 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.478 14:40:53 -- setup/common.sh@33 -- # echo 1024 00:03:53.478 14:40:53 -- setup/common.sh@33 -- # return 0 00:03:53.478 14:40:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.478 14:40:53 -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.478 14:40:53 -- setup/hugepages.sh@27 -- # local node 00:03:53.478 14:40:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.478 14:40:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.478 14:40:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.478 14:40:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.478 14:40:53 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.478 14:40:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.478 14:40:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.478 14:40:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.478 14:40:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.478 14:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.478 14:40:53 -- setup/common.sh@18 -- # local node=0 00:03:53.478 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.478 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.478 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.478 14:40:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.478 14:40:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.478 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.478 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22814312 kB' 'MemUsed: 10015572 kB' 'SwapCached: 0 kB' 'Active: 4739328 kB' 'Inactive: 3249216 kB' 'Active(anon): 4569596 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534236 kB' 'Mapped: 98584 kB' 'AnonPages: 457464 kB' 'Shmem: 4115288 kB' 'KernelStack: 6840 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100892 kB' 'Slab: 273824 kB' 'SReclaimable: 100892 kB' 'SUnreclaim: 172932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.478 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.478 14:40:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.738 14:40:53 -- setup/common.sh@33 -- # echo 0 00:03:53.738 14:40:53 -- setup/common.sh@33 -- # return 0 00:03:53.738 14:40:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.738 14:40:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.738 14:40:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.738 14:40:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.738 14:40:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.738 14:40:53 -- setup/common.sh@18 -- # local node=1 00:03:53.738 14:40:53 -- setup/common.sh@19 -- # local var val 00:03:53.738 14:40:53 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.738 14:40:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.738 14:40:53 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node1/meminfo ]]
00:03:53.738 14:40:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:53.738 14:40:53 -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.738 14:40:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.738 14:40:53 -- setup/common.sh@31 -- # IFS=': '
00:03:53.738 14:40:53 -- setup/common.sh@31 -- # read -r var val _
00:03:53.738 14:40:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 23762896 kB' 'MemUsed: 3948928 kB' 'SwapCached: 0 kB' 'Active: 1756900 kB' 'Inactive: 155000 kB' 'Active(anon): 1382604 kB' 'Inactive(anon): 0 kB' 'Active(file): 374296 kB' 'Inactive(file): 155000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1586720 kB' 'Mapped: 45300 kB' 'AnonPages: 325200 kB' 'Shmem: 1057424 kB' 'KernelStack: 5992 kB' 'PageTables: 3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56700 kB' 'Slab: 170004 kB' 'SReclaimable: 56700 kB' 'SUnreclaim: 113304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:53.738 14:40:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.738 14:40:53 -- setup/common.sh@32 -- # continue
00:03:53.739 14:40:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.739 14:40:53 -- setup/common.sh@33 -- # echo 0
00:03:53.739 14:40:53 -- setup/common.sh@33 -- # return 0
00:03:53.739 14:40:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.739 14:40:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.739 14:40:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.739 14:40:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.739 14:40:53 --
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:53.739 node0=512 expecting 512
00:03:53.739 14:40:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.739 14:40:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.739 14:40:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.739 14:40:53 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:53.739 node1=512 expecting 512
00:03:53.739 14:40:53 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:53.739
00:03:53.739 real 0m1.479s
00:03:53.739 user 0m0.609s
00:03:53.739 sys 0m0.835s
00:03:53.739 14:40:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:53.740 14:40:53 -- common/autotest_common.sh@10 -- # set +x
00:03:53.740 ************************************
00:03:53.740 END TEST even_2G_alloc
00:03:53.740 ************************************
00:03:53.740 14:40:53 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:53.740 14:40:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:53.740 14:40:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:53.740 14:40:53 -- common/autotest_common.sh@10 -- # set +x
00:03:53.740 ************************************
00:03:53.740 START TEST odd_alloc
00:03:53.740 ************************************
00:03:53.740 14:40:53 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:53.740 14:40:53 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:53.740 14:40:53 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:53.740 14:40:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:53.740 14:40:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:53.740 14:40:53 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:53.740 14:40:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:53.740 14:40:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:53.740 14:40:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:53.740 14:40:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:53.740 14:40:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:53.740 14:40:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:53.740 14:40:53 -- setup/hugepages.sh@83 -- # : 513
00:03:53.740 14:40:53 -- setup/hugepages.sh@84 -- # : 1
00:03:53.740 14:40:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:53.740 14:40:53 -- setup/hugepages.sh@83 -- # : 0
00:03:53.740 14:40:53 -- setup/hugepages.sh@84 -- # : 0
00:03:53.740 14:40:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:53.740 14:40:53 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:53.740 14:40:53 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:53.740 14:40:53 -- setup/hugepages.sh@160 -- # setup output
00:03:53.740 14:40:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:53.740 14:40:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:55.127 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:55.127 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.127 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:55.127 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:55.128 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:55.128 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:55.128 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:55.128 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:55.128 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:55.128 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:55.128 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:55.128 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:55.128 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:55.128 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:55.128 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:55.128 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:55.128 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:55.128 14:40:54 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:55.128 14:40:54 -- setup/hugepages.sh@89 -- # local node
00:03:55.128 14:40:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:55.128 14:40:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:55.128 14:40:54 -- setup/hugepages.sh@92 -- # local surp
00:03:55.128 14:40:54 -- setup/hugepages.sh@93 -- # local resv
00:03:55.128 14:40:54 -- setup/hugepages.sh@94 -- # local anon
00:03:55.128 14:40:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:55.128 14:40:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:55.128 14:40:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.128 14:40:54 -- setup/common.sh@18 -- # local node=
00:03:55.128 14:40:54 -- setup/common.sh@19 -- # local var val
00:03:55.128 14:40:54 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.128 14:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.128 14:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.128 14:40:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.128 14:40:54 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.128 14:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.128 14:40:54 -- setup/common.sh@31 -- # IFS=': '
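The long `[[ key == \K\e\y ]]` / `continue` runs in this trace are xtrace output from the `get_meminfo` helper scanning a meminfo file key by key until the requested field matches, echoing `0` when it never does. A minimal re-creation of that scan (a sketch only; `get_meminfo_value` and the sample file are made up for illustration and are not the actual `setup/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Hypothetical helper (not the real setup/common.sh get_meminfo):
# print the value for one meminfo key, defaulting to 0 if absent.
get_meminfo_value() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to the next line, which is what
        # the repeated "[[ key == \K\e\y ]] / continue" trace lines record.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    echo 0
}

sample=$(mktemp)
printf '%s\n' 'MemTotal: 27711824 kB' 'HugePages_Total: 512' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 512
get_meminfo_value HugePages_Surp "$sample"    # prints 0 (key absent)
```

Because `IFS=': '` splits on both the colon and the space, `val` receives the bare number and the trailing `kB` unit falls into the throwaway `_` variable.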
00:03:55.128 14:40:54 -- setup/common.sh@31 -- # read -r var val _
00:03:55.128 14:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46594180 kB' 'MemAvailable: 50127556 kB' 'Buffers: 7732 kB' 'Cached: 9113284 kB' 'SwapCached: 0 kB' 'Active: 6493472 kB' 'Inactive: 3404216 kB' 'Active(anon): 5949444 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779996 kB' 'Mapped: 142856 kB' 'Shmem: 5172772 kB' 'KReclaimable: 157512 kB' 'Slab: 443520 kB' 'SReclaimable: 157512 kB' 'SUnreclaim: 286008 kB' 'KernelStack: 12736 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7546068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193752 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
00:03:55.128 14:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.128 14:40:54 -- setup/common.sh@32 -- # continue
00:03:55.129 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.129 14:40:55 -- setup/common.sh@33 -- # echo 0
00:03:55.129 14:40:55 -- setup/common.sh@33 -- # return 0
00:03:55.129 14:40:55 -- setup/hugepages.sh@97 -- # anon=0
00:03:55.129 14:40:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.129 14:40:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.129 14:40:55 -- setup/common.sh@18 -- # local node=
00:03:55.129 14:40:55 -- setup/common.sh@19 -- # local var val
00:03:55.129 14:40:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.129 14:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.129 14:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.129 14:40:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.129 14:40:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.129 14:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.129 14:40:55 -- setup/common.sh@31 -- # IFS=': '
00:03:55.129 14:40:55 -- setup/common.sh@31 -- # read -r var val _
00:03:55.129 14:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46594940 kB' 'MemAvailable: 50128316 kB' 'Buffers: 7732 kB' 'Cached: 9113284 kB' 'SwapCached: 0 kB' 'Active: 6494240 kB' 'Inactive: 3404216 kB' 'Active(anon): 5950212 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 780780 kB' 'Mapped: 142932 kB' 'Shmem: 5172772 kB' 'KReclaimable: 157512 kB' 'Slab: 443540 kB' 'SReclaimable: 157512 kB' 'SUnreclaim: 286028 kB' 'KernelStack: 12752 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7546080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
00:03:55.129 14:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.129 14:40:55 -- setup/common.sh@32 -- # continue
00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue
00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': '
00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _
00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.130 14:40:55 -- setup/common.sh@33 -- # echo 0 00:03:55.130 14:40:55 -- setup/common.sh@33 -- # return 0 00:03:55.130 14:40:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.130 14:40:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.130 14:40:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.130 14:40:55 -- setup/common.sh@18 -- # local node= 00:03:55.130 14:40:55 -- setup/common.sh@19 -- # local var val 00:03:55.130 14:40:55 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:55.130 14:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.130 14:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.130 14:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.130 14:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.130 14:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.130 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.130 14:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46595488 kB' 'MemAvailable: 50128864 kB' 'Buffers: 7732 kB' 'Cached: 9113288 kB' 'SwapCached: 0 kB' 'Active: 6492892 kB' 'Inactive: 3404216 kB' 'Active(anon): 5948864 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779404 kB' 'Mapped: 142848 kB' 'Shmem: 5172776 kB' 'KReclaimable: 157512 kB' 'Slab: 443500 kB' 'SReclaimable: 157512 kB' 'SUnreclaim: 285988 kB' 'KernelStack: 12736 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7546092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.130 14:40:55 -- setup/common.sh@32 -- # continue 
00:03:55.130 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 
14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.131 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.131 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.132 14:40:55 -- setup/common.sh@33 -- # echo 0 00:03:55.132 14:40:55 -- setup/common.sh@33 -- # return 0 00:03:55.132 14:40:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.132 14:40:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.132 nr_hugepages=1025 00:03:55.132 14:40:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.132 resv_hugepages=0 00:03:55.132 14:40:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.132 surplus_hugepages=0 00:03:55.132 14:40:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.132 anon_hugepages=0 00:03:55.132 14:40:55 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.132 14:40:55 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:55.132 14:40:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.132 14:40:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.132 14:40:55 -- setup/common.sh@18 -- # local node= 00:03:55.132 14:40:55 -- setup/common.sh@19 -- # local var val 00:03:55.132 14:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.132 14:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.132 14:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.132 14:40:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.132 14:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.132 14:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46595488 kB' 'MemAvailable: 50128864 kB' 'Buffers: 7732 kB' 'Cached: 9113316 kB' 'SwapCached: 0 kB' 'Active: 6493208 kB' 'Inactive: 3404216 kB' 'Active(anon): 5949180 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 779684 kB' 'Mapped: 142848 kB' 'Shmem: 5172804 kB' 'KReclaimable: 157512 kB' 'Slab: 443500 kB' 'SReclaimable: 157512 kB' 'SUnreclaim: 285988 kB' 'KernelStack: 12736 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7546108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 
14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.132 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.132 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.133 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.133 14:40:55 -- setup/common.sh@33 -- # echo 1025 00:03:55.133 14:40:55 -- setup/common.sh@33 -- # return 0 00:03:55.133 14:40:55 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.133 14:40:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.133 14:40:55 -- setup/hugepages.sh@27 -- # local node 00:03:55.133 14:40:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.133 14:40:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.133 14:40:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.133 14:40:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.133 14:40:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.133 14:40:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.133 14:40:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.133 14:40:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.133 14:40:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.133 14:40:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.133 14:40:55 -- setup/common.sh@18 -- # local node=0 00:03:55.133 14:40:55 -- setup/common.sh@19 -- # local var 
val 00:03:55.133 14:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.133 14:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.133 14:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.133 14:40:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.133 14:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.133 14:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.133 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22829836 kB' 'MemUsed: 10000048 kB' 'SwapCached: 0 kB' 'Active: 4741768 kB' 'Inactive: 3249216 kB' 'Active(anon): 4572036 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534312 kB' 'Mapped: 98084 kB' 'AnonPages: 459940 kB' 'Shmem: 4115364 kB' 'KernelStack: 6856 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100892 kB' 'Slab: 273396 kB' 'SReclaimable: 100892 kB' 'SUnreclaim: 172504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.134 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.134 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.134 14:40:55 -- setup/common.sh@33 -- # echo 0 00:03:55.134 14:40:55 -- setup/common.sh@33 -- # return 0 00:03:55.134 14:40:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.134 14:40:55 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:55.134 14:40:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.134 14:40:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.135 14:40:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.135 14:40:55 -- setup/common.sh@18 -- # local node=1 00:03:55.135 14:40:55 -- setup/common.sh@19 -- # local var val 00:03:55.135 14:40:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.135 14:40:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.135 14:40:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.135 14:40:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.135 14:40:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.135 14:40:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 23764896 kB' 'MemUsed: 3946928 kB' 'SwapCached: 0 kB' 'Active: 1753460 kB' 'Inactive: 155000 kB' 'Active(anon): 1379164 kB' 'Inactive(anon): 0 kB' 'Active(file): 374296 kB' 'Inactive(file): 155000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1586748 kB' 'Mapped: 45200 kB' 'AnonPages: 321760 kB' 'Shmem: 1057452 kB' 'KernelStack: 5864 kB' 'PageTables: 2740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56620 kB' 'Slab: 170104 kB' 'SReclaimable: 56620 kB' 'SUnreclaim: 113484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 
00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.135 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.135 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.136 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.136 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.136 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.136 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.136 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.136 14:40:55 -- setup/common.sh@32 -- # continue 00:03:55.136 14:40:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.136 14:40:55 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.136 14:40:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.136 14:40:55 -- setup/common.sh@33 -- # echo 0 00:03:55.136 14:40:55 -- setup/common.sh@33 -- # return 0 00:03:55.136 14:40:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.136 14:40:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.136 14:40:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.136 14:40:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.136 14:40:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.136 node0=512 expecting 513 00:03:55.136 14:40:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.136 14:40:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.136 14:40:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.136 14:40:55 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.136 node1=513 expecting 512 00:03:55.136 14:40:55 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.136 00:03:55.136 real 0m1.415s 00:03:55.136 user 0m0.587s 00:03:55.136 sys 0m0.790s 00:03:55.136 14:40:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.136 14:40:55 -- common/autotest_common.sh@10 -- # set +x 00:03:55.136 ************************************ 00:03:55.136 END TEST odd_alloc 00:03:55.136 ************************************ 00:03:55.136 14:40:55 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.136 14:40:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.136 14:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.136 14:40:55 -- common/autotest_common.sh@10 -- # set +x 00:03:55.396 ************************************ 00:03:55.396 START TEST custom_alloc 00:03:55.396 ************************************ 00:03:55.396 14:40:55 -- common/autotest_common.sh@1111 -- # 
custom_alloc 00:03:55.396 14:40:55 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.396 14:40:55 -- setup/hugepages.sh@169 -- # local node 00:03:55.396 14:40:55 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.396 14:40:55 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.396 14:40:55 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.396 14:40:55 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.397 14:40:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.397 14:40:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.397 14:40:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.397 14:40:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.397 14:40:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.397 14:40:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.397 14:40:55 -- setup/hugepages.sh@83 -- # : 256 00:03:55.397 14:40:55 -- setup/hugepages.sh@84 -- # : 1 00:03:55.397 14:40:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.397 14:40:55 -- setup/hugepages.sh@83 -- # : 0 00:03:55.397 14:40:55 -- setup/hugepages.sh@84 -- # : 0 00:03:55.397 14:40:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.397 14:40:55 -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.397 14:40:55 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.397 14:40:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.397 14:40:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.397 14:40:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.397 14:40:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.397 14:40:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.397 14:40:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.397 14:40:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.397 14:40:55 -- setup/hugepages.sh@78 -- # return 0 00:03:55.397 14:40:55 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.397 14:40:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.397 14:40:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.397 14:40:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.397 14:40:55 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.397 14:40:55 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.397 14:40:55 -- 
setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.397 14:40:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.397 14:40:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.397 14:40:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.397 14:40:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.397 14:40:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.397 14:40:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.397 14:40:55 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.397 14:40:55 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.397 14:40:55 -- setup/hugepages.sh@78 -- # return 0 00:03:55.397 14:40:55 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.397 14:40:55 -- setup/hugepages.sh@187 -- # setup output 00:03:55.397 14:40:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.397 14:40:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:56.332 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:56.332 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.332 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:56.332 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:56.332 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:56.332 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:56.332 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:56.332 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:56.332 0000:00:04.0 (8086 0e20): Already using the vfio-pci 
driver 00:03:56.332 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:56.332 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:56.332 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:56.332 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:56.332 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:56.332 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:56.332 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:56.332 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:56.596 14:40:56 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:56.596 14:40:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:56.596 14:40:56 -- setup/hugepages.sh@89 -- # local node 00:03:56.596 14:40:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.596 14:40:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.596 14:40:56 -- setup/hugepages.sh@92 -- # local surp 00:03:56.596 14:40:56 -- setup/hugepages.sh@93 -- # local resv 00:03:56.596 14:40:56 -- setup/hugepages.sh@94 -- # local anon 00:03:56.596 14:40:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.596 14:40:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.596 14:40:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.596 14:40:56 -- setup/common.sh@18 -- # local node= 00:03:56.596 14:40:56 -- setup/common.sh@19 -- # local var val 00:03:56.596 14:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.596 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.596 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.596 14:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.597 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.597 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45529184 kB' 'MemAvailable: 49062556 kB' 'Buffers: 7732 kB' 'Cached: 9113384 kB' 'SwapCached: 0 kB' 'Active: 6495048 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951020 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781440 kB' 'Mapped: 142892 kB' 'Shmem: 5172872 kB' 'KReclaimable: 157504 kB' 'Slab: 443528 kB' 'SReclaimable: 157504 kB' 'SUnreclaim: 286024 kB' 'KernelStack: 12704 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7546128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193736 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # 
continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.597 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.597 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.598 14:40:56 -- setup/common.sh@33 -- # echo 0 00:03:56.598 14:40:56 -- setup/common.sh@33 -- # return 0 00:03:56.598 14:40:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.598 14:40:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.598 14:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.598 14:40:56 -- setup/common.sh@18 -- # local node= 00:03:56.598 14:40:56 -- setup/common.sh@19 -- # local var val 00:03:56.598 14:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.598 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.598 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.598 14:40:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.598 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.598 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45530252 kB' 'MemAvailable: 49063624 kB' 'Buffers: 7732 kB' 'Cached: 9113388 kB' 'SwapCached: 0 kB' 'Active: 6495892 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951864 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 
'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 782292 kB' 'Mapped: 142960 kB' 'Shmem: 5172876 kB' 'KReclaimable: 157504 kB' 'Slab: 443584 kB' 'SReclaimable: 157504 kB' 'SUnreclaim: 286080 kB' 'KernelStack: 12752 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7546136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # 
continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.598 14:40:56 -- setup/common.sh@31 -- # IFS=': '
00:03:56.598 14:40:56 -- setup/common.sh@31 -- # read -r var val _
00:03:56.598 14:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.598 14:40:56 -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue cycle repeated for each remaining /proc/meminfo key, Writeback through HugePages_Rsvd ...]
00:03:56.599 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.599 14:40:56 -- setup/common.sh@33 -- # echo 0
00:03:56.599 14:40:56 -- setup/common.sh@33 -- # return 0
00:03:56.599 14:40:56 -- setup/hugepages.sh@99 -- # surp=0
00:03:56.599 14:40:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:56.599 14:40:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:56.599 14:40:56 -- setup/common.sh@18 -- # local node=
00:03:56.599 14:40:56 -- setup/common.sh@19 -- # local var val
00:03:56.599 14:40:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:56.599 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.599 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.599 14:40:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.599 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.599 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.599 14:40:56 -- setup/common.sh@31 -- # IFS=': '
00:03:56.599 14:40:56 -- setup/common.sh@31 -- # read -r var val _
00:03:56.599 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45530724 kB' 'MemAvailable: 49064096 kB' 'Buffers: 7732 kB' 'Cached: 9113400 kB' 'SwapCached: 0 kB' 'Active: 6495500 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951472 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781820 kB' 'Mapped: 142884 kB' 'Shmem: 5172888 kB' 'KReclaimable: 157504 kB' 'Slab: 443564 kB' 'SReclaimable: 157504 kB' 'SUnreclaim: 286060 kB' 'KernelStack: 12736 kB' 'PageTables: 7708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7546152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
[... identical IFS=': ' / read -r var val _ / [[ <key> == HugePages_Rsvd ]] / continue cycle repeated for each /proc/meminfo key, MemTotal through HugePages_Free ...]
00:03:56.601 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:56.601 14:40:56 -- setup/common.sh@33 -- # echo 0
00:03:56.601 14:40:56 -- setup/common.sh@33 -- # return 0
00:03:56.601 14:40:56 -- setup/hugepages.sh@100 -- # resv=0
00:03:56.601 14:40:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:56.601 nr_hugepages=1536
00:03:56.601 14:40:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:56.601 resv_hugepages=0
00:03:56.601 14:40:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:56.601 surplus_hugepages=0
00:03:56.601 14:40:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:56.601 anon_hugepages=0
00:03:56.601 14:40:56 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:56.601 14:40:56 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:56.601 14:40:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:56.601 14:40:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:56.601 14:40:56 -- setup/common.sh@18 -- # local node=
00:03:56.601 14:40:56 -- setup/common.sh@19 -- # local var val
00:03:56.601 14:40:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:56.601 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.601 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:56.601 14:40:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:56.601 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.601 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.601 14:40:56 -- setup/common.sh@31 -- # IFS=': '
00:03:56.601 14:40:56 -- setup/common.sh@31 -- # read -r var val _
00:03:56.601 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45530724 kB' 'MemAvailable: 49064096 kB' 'Buffers: 7732 kB' 'Cached: 9113416 kB' 'SwapCached: 0 kB' 'Active: 6495280 kB' 'Inactive: 3404216 kB' 'Active(anon): 5951252 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 781580 kB' 'Mapped: 142884 kB' 'Shmem: 5172904 kB' 'KReclaimable: 157504 kB' 'Slab: 443564 kB' 'SReclaimable: 157504 kB' 'SUnreclaim: 286060 kB' 'KernelStack: 12720 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 7546168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193704 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
[... identical IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue cycle continues for each /proc/meminfo key ...]
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.602 14:40:56 -- setup/common.sh@33 -- # echo 1536 00:03:56.602 14:40:56 -- setup/common.sh@33 -- # return 0 00:03:56.602 14:40:56 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:56.602 14:40:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.602 14:40:56 -- setup/hugepages.sh@27 -- # local node 00:03:56.602 14:40:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.602 14:40:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.602 14:40:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.602 14:40:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.602 14:40:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.602 14:40:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.602 14:40:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.602 14:40:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.602 14:40:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.602 14:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.602 14:40:56 -- setup/common.sh@18 -- # local node=0 00:03:56.602 14:40:56 -- setup/common.sh@19 -- # local var 
val 00:03:56.602 14:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.602 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.602 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.602 14:40:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.602 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.602 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22822208 kB' 'MemUsed: 10007676 kB' 'SwapCached: 0 kB' 'Active: 4741416 kB' 'Inactive: 3249216 kB' 'Active(anon): 4571684 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534412 kB' 'Mapped: 97680 kB' 'AnonPages: 459420 kB' 'Shmem: 4115464 kB' 'KernelStack: 6888 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100884 kB' 'Slab: 273328 kB' 'SReclaimable: 100884 kB' 'SUnreclaim: 172444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.602 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.602 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@33 -- # echo 0 00:03:56.603 14:40:56 -- setup/common.sh@33 -- # return 0 00:03:56.603 14:40:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.603 14:40:56 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:56.603 14:40:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.603 14:40:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.603 14:40:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.603 14:40:56 -- setup/common.sh@18 -- # local node=1 00:03:56.603 14:40:56 -- setup/common.sh@19 -- # local var val 00:03:56.603 14:40:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.603 14:40:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.603 14:40:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.603 14:40:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.603 14:40:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.603 14:40:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 22709960 kB' 'MemUsed: 5001864 kB' 'SwapCached: 0 kB' 'Active: 1754496 kB' 'Inactive: 155000 kB' 'Active(anon): 1380200 kB' 'Inactive(anon): 0 kB' 'Active(file): 374296 kB' 'Inactive(file): 155000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1586748 kB' 'Mapped: 45204 kB' 'AnonPages: 322860 kB' 'Shmem: 1057452 kB' 'KernelStack: 5896 kB' 'PageTables: 2848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56620 kB' 'Slab: 170236 kB' 'SReclaimable: 56620 kB' 'SUnreclaim: 113616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # 
continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.603 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.603 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.604 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.604 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.863 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.863 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.864 14:40:56 -- setup/common.sh@32 -- # continue 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.864 14:40:56 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.864 14:40:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.864 14:40:56 -- setup/common.sh@33 -- # echo 0 00:03:56.864 14:40:56 -- setup/common.sh@33 -- # return 0 00:03:56.864 14:40:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.864 14:40:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.864 14:40:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.864 14:40:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.864 14:40:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.864 node0=512 expecting 512 00:03:56.864 14:40:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.864 14:40:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.864 14:40:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.864 14:40:56 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:56.864 node1=1024 expecting 1024 00:03:56.864 14:40:56 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:56.864 00:03:56.864 real 0m1.439s 00:03:56.864 user 0m0.600s 00:03:56.864 sys 0m0.803s 00:03:56.864 14:40:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.864 14:40:56 -- common/autotest_common.sh@10 -- # set +x 00:03:56.864 ************************************ 00:03:56.864 END TEST custom_alloc 00:03:56.864 ************************************ 00:03:56.864 14:40:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:56.864 14:40:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.864 14:40:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.864 14:40:56 -- common/autotest_common.sh@10 -- # set +x 00:03:56.864 ************************************ 00:03:56.864 START TEST no_shrink_alloc 00:03:56.864 ************************************ 00:03:56.864 14:40:56 -- 
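The `custom_alloc` trace above repeatedly scans a meminfo file for one key (`HugePages_Total`, then `HugePages_Surp` per node). A minimal sketch of that lookup, assuming a standard Linux layout; `get_meminfo` here is a simplified stand-in for the helper in `setup/common.sh`, not the script itself:

```shell
#!/usr/bin/env bash
# Simplified sketch of the meminfo lookup traced above.
# Assumptions: Linux with /proc/meminfo, and (when a node is given)
# NUMA per-node files at /sys/devices/system/node/nodeN/meminfo.
get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local var val _
	# Per-node files prefix each line with "Node N "; strip it so the
	# "Key: value" layout matches /proc/meminfo, then scan for the key.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(sed 's/^Node [0-9]* //' "$mem_f")
	return 1
}

get_meminfo MemTotal        # global, kB
get_meminfo HugePages_Surp 0   # node 0, mirrors the traced call
```

The trace shows why the real helper loops over every line: bash has no associative view of meminfo, so each field (`Dirty`, `Writeback`, `AnonPages`, ...) is read and compared until the requested key matches, producing the long `[[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue` runs seen above.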
common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:56.864 14:40:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:56.864 14:40:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.864 14:40:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:56.864 14:40:56 -- setup/hugepages.sh@51 -- # shift 00:03:56.864 14:40:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:56.864 14:40:56 -- setup/hugepages.sh@52 -- # local node_ids 00:03:56.864 14:40:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.864 14:40:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.864 14:40:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:56.864 14:40:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:56.864 14:40:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.864 14:40:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.864 14:40:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.864 14:40:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.864 14:40:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.864 14:40:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:56.864 14:40:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:56.864 14:40:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:56.864 14:40:56 -- setup/hugepages.sh@73 -- # return 0 00:03:56.864 14:40:56 -- setup/hugepages.sh@198 -- # setup output 00:03:56.864 14:40:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.864 14:40:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:58.248 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.248 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.248 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.248 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.248 0000:00:04.4 (8086 0e24): 
Already using the vfio-pci driver 00:03:58.248 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.248 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.248 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.248 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.248 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:58.248 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:58.248 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:58.248 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:58.248 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:58.248 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:58.248 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:58.248 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:58.248 14:40:58 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:58.248 14:40:58 -- setup/hugepages.sh@89 -- # local node 00:03:58.248 14:40:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.248 14:40:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.248 14:40:58 -- setup/hugepages.sh@92 -- # local surp 00:03:58.248 14:40:58 -- setup/hugepages.sh@93 -- # local resv 00:03:58.248 14:40:58 -- setup/hugepages.sh@94 -- # local anon 00:03:58.248 14:40:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.248 14:40:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.248 14:40:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.248 14:40:58 -- setup/common.sh@18 -- # local node= 00:03:58.249 14:40:58 -- setup/common.sh@19 -- # local var val 00:03:58.249 14:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.249 14:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.249 14:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.249 14:40:58 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.249 14:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.249 14:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46547244 kB' 'MemAvailable: 50080632 kB' 'Buffers: 7732 kB' 'Cached: 9113484 kB' 'SwapCached: 0 kB' 'Active: 6498004 kB' 'Inactive: 3404216 kB' 'Active(anon): 5953976 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 784288 kB' 'Mapped: 142984 kB' 'Shmem: 5172972 kB' 'KReclaimable: 157536 kB' 'Slab: 443844 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286308 kB' 'KernelStack: 12752 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193672 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- 
setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 
00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.249 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.249 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 
-- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 
14:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.250 14:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.250 14:40:58 -- setup/common.sh@33 -- # echo 0 00:03:58.250 14:40:58 -- setup/common.sh@33 -- # return 0 00:03:58.250 14:40:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:58.250 14:40:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.250 14:40:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.250 14:40:58 -- setup/common.sh@18 -- # local node= 00:03:58.250 14:40:58 -- setup/common.sh@19 -- # local var val 00:03:58.250 14:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.250 14:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.250 14:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.250 14:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.250 14:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.250 14:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.250 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46547200 kB' 'MemAvailable: 50080588 kB' 'Buffers: 7732 kB' 'Cached: 9113484 kB' 'SwapCached: 0 kB' 'Active: 6498376 kB' 'Inactive: 3404216 kB' 'Active(anon): 5954348 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 784784 kB' 'Mapped: 143060 kB' 'Shmem: 5172972 kB' 'KReclaimable: 157536 kB' 'Slab: 443892 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286356 kB' 'KernelStack: 12720 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193624 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 
00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.251 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.251 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 
14:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.252 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.252 14:40:58 -- setup/common.sh@33 -- # echo 0 00:03:58.252 14:40:58 -- setup/common.sh@33 -- # return 0 00:03:58.252 14:40:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:58.252 14:40:58 -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:58.252 14:40:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.252 14:40:58 -- setup/common.sh@18 -- # local node= 00:03:58.252 14:40:58 -- setup/common.sh@19 -- # local var val 00:03:58.252 14:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.252 14:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.252 14:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.252 14:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.252 14:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.252 14:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.252 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46547200 kB' 'MemAvailable: 50080588 kB' 'Buffers: 7732 kB' 'Cached: 9113496 kB' 'SwapCached: 0 kB' 'Active: 6497676 kB' 'Inactive: 3404216 kB' 'Active(anon): 5953648 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 783912 kB' 'Mapped: 142912 kB' 'Shmem: 5172984 kB' 'KReclaimable: 157536 kB' 'Slab: 443876 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286340 kB' 'KernelStack: 12704 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193624 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 
14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.253 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.253 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.254 14:40:58 -- setup/common.sh@33 -- # echo 0 00:03:58.254 14:40:58 -- setup/common.sh@33 -- # return 0 00:03:58.254 14:40:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:58.254 14:40:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.254 nr_hugepages=1024 00:03:58.254 14:40:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.254 resv_hugepages=0 00:03:58.254 14:40:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.254 surplus_hugepages=0 00:03:58.254 14:40:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.254 anon_hugepages=0 00:03:58.254 14:40:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.254 14:40:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.254 14:40:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.254 14:40:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.254 14:40:58 -- setup/common.sh@18 -- # local node= 00:03:58.254 14:40:58 -- setup/common.sh@19 -- # local var val 00:03:58.254 14:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.254 14:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.254 14:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.254 14:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.254 14:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.254 14:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541708 kB' 'MemFree: 46547636 kB' 'MemAvailable: 50081024 kB' 'Buffers: 7732 kB' 'Cached: 9113512 kB' 'SwapCached: 0 kB' 'Active: 6497636 kB' 'Inactive: 3404216 kB' 'Active(anon): 5953608 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 783848 kB' 'Mapped: 142912 kB' 'Shmem: 5173000 kB' 'KReclaimable: 157536 kB' 'Slab: 443876 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286340 kB' 'KernelStack: 12672 kB' 'PageTables: 7468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7547464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193640 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 
14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.254 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.254 14:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.255 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.255 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 
-- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.256 14:40:58 -- setup/common.sh@33 -- # echo 1024 00:03:58.256 14:40:58 -- setup/common.sh@33 -- # return 0 00:03:58.256 14:40:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.256 14:40:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.256 14:40:58 -- setup/hugepages.sh@27 -- # local node 00:03:58.256 14:40:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.256 14:40:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.256 14:40:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.256 14:40:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.256 14:40:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.256 14:40:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.256 14:40:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.256 14:40:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:03:58.256 14:40:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.256 14:40:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.256 14:40:58 -- setup/common.sh@18 -- # local node=0 00:03:58.256 14:40:58 -- setup/common.sh@19 -- # local var val 00:03:58.256 14:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.256 14:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.256 14:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.256 14:40:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.256 14:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.256 14:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21778856 kB' 'MemUsed: 11051028 kB' 'SwapCached: 0 kB' 'Active: 4742372 kB' 'Inactive: 3249216 kB' 'Active(anon): 4572640 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534500 kB' 'Mapped: 97708 kB' 'AnonPages: 460272 kB' 'Shmem: 4115552 kB' 'KernelStack: 6760 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100916 kB' 'Slab: 273564 kB' 'SReclaimable: 100916 kB' 'SUnreclaim: 172648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.256 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.256 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # continue 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.257 14:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.257 14:40:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.257 14:40:58 -- setup/common.sh@33 -- # echo 0 00:03:58.257 14:40:58 -- setup/common.sh@33 -- # return 0 00:03:58.257 14:40:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.257 14:40:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.257 14:40:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.257 14:40:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.257 14:40:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.257 node0=1024 expecting 1024 00:03:58.257 14:40:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.257 14:40:58 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:58.257 14:40:58 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:58.257 14:40:58 -- setup/hugepages.sh@202 -- # setup output 00:03:58.257 14:40:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.257 14:40:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:59.665 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.665 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.665 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.665 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.665 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:59.665 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.665 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.665 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.665 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.665 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:59.665 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:59.665 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:59.665 0000:80:04.4 (8086 0e24): Already using the vfio-pci 
driver 00:03:59.665 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:59.665 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:59.665 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:59.665 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:59.665 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:59.665 14:40:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:59.665 14:40:59 -- setup/hugepages.sh@89 -- # local node 00:03:59.665 14:40:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.665 14:40:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.665 14:40:59 -- setup/hugepages.sh@92 -- # local surp 00:03:59.665 14:40:59 -- setup/hugepages.sh@93 -- # local resv 00:03:59.665 14:40:59 -- setup/hugepages.sh@94 -- # local anon 00:03:59.665 14:40:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.665 14:40:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.665 14:40:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.665 14:40:59 -- setup/common.sh@18 -- # local node= 00:03:59.665 14:40:59 -- setup/common.sh@19 -- # local var val 00:03:59.665 14:40:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.665 14:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.665 14:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.665 14:40:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.665 14:40:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.665 14:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46531240 kB' 'MemAvailable: 50064628 kB' 'Buffers: 7732 kB' 'Cached: 9113560 kB' 'SwapCached: 0 kB' 'Active: 6499856 kB' 
'Inactive: 3404216 kB' 'Active(anon): 5955828 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 786052 kB' 'Mapped: 142956 kB' 'Shmem: 5173048 kB' 'KReclaimable: 157536 kB' 'Slab: 443960 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286424 kB' 'KernelStack: 12736 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193784 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ 
Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.665 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.665 14:40:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:59.665 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # 
continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.666 14:40:59 -- setup/common.sh@33 -- # echo 0 00:03:59.666 14:40:59 -- setup/common.sh@33 -- # return 0 00:03:59.666 14:40:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.666 14:40:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.666 14:40:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.666 14:40:59 -- setup/common.sh@18 -- # local node= 00:03:59.666 14:40:59 -- setup/common.sh@19 -- # local var val 00:03:59.666 14:40:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.666 14:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.666 14:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.666 14:40:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.666 14:40:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.666 14:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.666 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.666 14:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46536900 kB' 'MemAvailable: 50070288 kB' 'Buffers: 7732 kB' 'Cached: 9113560 kB' 'SwapCached: 0 kB' 'Active: 6500024 kB' 'Inactive: 3404216 kB' 'Active(anon): 5955996 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 786152 kB' 'Mapped: 142940 kB' 'Shmem: 5173048 kB' 'KReclaimable: 157536 kB' 'Slab: 443952 
kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286416 kB' 'KernelStack: 12736 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193752 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB' 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.667 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.667 14:40:59 -- setup/common.sh@32 -- # 
continue
00:03:59.667 14:40:59 -- setup/common.sh@31 -- # IFS=': '
00:03:59.667 14:40:59 -- setup/common.sh@31 -- # read -r var val _
00:03:59.667 14:40:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.667 14:40:59 -- setup/common.sh@32 -- # continue
00:03:59.668 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.668 14:40:59 -- setup/common.sh@33 -- # echo 0
00:03:59.668 14:40:59 -- setup/common.sh@33 -- # return 0
00:03:59.668 14:40:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:59.668 14:40:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.668 14:40:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.668 14:40:59 -- setup/common.sh@18 -- # local node=
00:03:59.668 14:40:59 -- setup/common.sh@19 -- # local var val
00:03:59.668 14:40:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:59.668 14:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.668 14:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.668 14:40:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.668 14:40:59 -- setup/common.sh@28
-- # mapfile -t mem
00:03:59.668 14:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.668 14:40:59 -- setup/common.sh@31 -- # IFS=': '
00:03:59.668 14:40:59 -- setup/common.sh@31 -- # read -r var val _
00:03:59.668 14:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46537496 kB' 'MemAvailable: 50070884 kB' 'Buffers: 7732 kB' 'Cached: 9113576 kB' 'SwapCached: 0 kB' 'Active: 6499488 kB' 'Inactive: 3404216 kB' 'Active(anon): 5955460 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 785636 kB' 'Mapped: 142920 kB' 'Shmem: 5173064 kB' 'KReclaimable: 157536 kB' 'Slab: 443936 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286400 kB' 'KernelStack: 12768 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193736 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
00:03:59.668 14:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.668 14:40:59 -- setup/common.sh@32 -- # continue
00:03:59.670 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.670 14:40:59 -- setup/common.sh@33 -- # echo 0
00:03:59.670 14:40:59 -- setup/common.sh@33 -- # return 0
00:03:59.670 14:40:59 -- setup/hugepages.sh@100 -- # resv=0
00:03:59.670 14:40:59 -- setup/hugepages.sh@102 -- #
echo nr_hugepages=1024
00:03:59.670 nr_hugepages=1024
00:03:59.670 14:40:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.670 resv_hugepages=0
00:03:59.670 14:40:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.670 surplus_hugepages=0
00:03:59.670 14:40:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.670 anon_hugepages=0
00:03:59.670 14:40:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.670 14:40:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:59.670 14:40:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.670 14:40:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.670 14:40:59 -- setup/common.sh@18 -- # local node=
00:03:59.670 14:40:59 -- setup/common.sh@19 -- # local var val
00:03:59.670 14:40:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:59.670 14:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.670 14:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.670 14:40:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.670 14:40:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.670 14:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.670 14:40:59 -- setup/common.sh@31 -- # IFS=': '
00:03:59.670 14:40:59 -- setup/common.sh@31 -- # read -r var val _
00:03:59.670 14:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46539280 kB' 'MemAvailable: 50072668 kB' 'Buffers: 7732 kB' 'Cached: 9113592 kB' 'SwapCached: 0 kB' 'Active: 6499500 kB' 'Inactive: 3404216 kB' 'Active(anon): 5955472 kB' 'Inactive(anon): 0 kB' 'Active(file): 544028 kB' 'Inactive(file): 3404216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 785636 kB' 'Mapped: 142920 kB' 'Shmem: 5173080 kB' 'KReclaimable: 157536 kB' 'Slab: 443936 kB' 'SReclaimable: 157536 kB' 'SUnreclaim: 286400 kB' 'KernelStack: 12768 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7546840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 193736 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 458332 kB' 'DirectMap2M: 10995712 kB' 'DirectMap1G: 57671680 kB'
00:03:59.670 14:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.670 14:40:59 -- setup/common.sh@32 -- # continue
00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=':
' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 
14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 
14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.671 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.671 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 
14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.672 14:40:59 -- setup/common.sh@33 -- # echo 1024 00:03:59.672 14:40:59 -- setup/common.sh@33 -- # return 0 00:03:59.672 14:40:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.672 14:40:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.672 14:40:59 -- setup/hugepages.sh@27 -- # local node 00:03:59.672 14:40:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.672 14:40:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.672 14:40:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.672 14:40:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.672 14:40:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.672 14:40:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.672 14:40:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.672 14:40:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.672 14:40:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.672 14:40:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.672 14:40:59 -- setup/common.sh@18 -- # local node=0 00:03:59.672 14:40:59 -- setup/common.sh@19 -- # local var val 00:03:59.672 14:40:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.672 14:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.672 14:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.672 14:40:59 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.672 14:40:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.672 14:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21773796 kB' 'MemUsed: 11056088 kB' 'SwapCached: 0 kB' 'Active: 4744660 kB' 'Inactive: 3249216 kB' 'Active(anon): 4574928 kB' 'Inactive(anon): 0 kB' 'Active(file): 169732 kB' 'Inactive(file): 3249216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7534580 kB' 'Mapped: 97716 kB' 'AnonPages: 462528 kB' 'Shmem: 4115632 kB' 'KernelStack: 6824 kB' 'PageTables: 4800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100916 kB' 'Slab: 273596 kB' 'SReclaimable: 100916 kB' 'SUnreclaim: 172680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 
14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.672 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.672 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # 
[[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # continue 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.673 14:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.673 14:40:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.673 14:40:59 -- setup/common.sh@33 -- # echo 0 00:03:59.673 14:40:59 -- setup/common.sh@33 -- # return 0 00:03:59.673 14:40:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.673 14:40:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.673 14:40:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.673 14:40:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.673 14:40:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 
1024' 00:03:59.673 node0=1024 expecting 1024 00:03:59.673 14:40:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.673 00:03:59.673 real 0m2.834s 00:03:59.673 user 0m1.170s 00:03:59.673 sys 0m1.585s 00:03:59.673 14:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.673 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:59.673 ************************************ 00:03:59.673 END TEST no_shrink_alloc 00:03:59.673 ************************************ 00:03:59.673 14:40:59 -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.673 14:40:59 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.673 14:40:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.673 14:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.673 14:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.673 14:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.673 14:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.673 14:40:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.673 14:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.673 14:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.673 14:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.673 14:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.673 14:40:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.673 14:40:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.673 00:03:59.673 real 0m12.760s 00:03:59.673 user 0m4.484s 00:03:59.673 sys 0m6.063s 00:03:59.673 14:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.673 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:59.673 ************************************ 00:03:59.673 END TEST hugepages 00:03:59.673 
************************************ 00:03:59.673 14:40:59 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:59.673 14:40:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.673 14:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.673 14:40:59 -- common/autotest_common.sh@10 -- # set +x 00:03:59.932 ************************************ 00:03:59.932 START TEST driver 00:03:59.932 ************************************ 00:03:59.932 14:40:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:59.932 * Looking for test storage... 00:03:59.932 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:59.932 14:40:59 -- setup/driver.sh@68 -- # setup reset 00:03:59.932 14:40:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.932 14:40:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.470 14:41:02 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:02.470 14:41:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.470 14:41:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.470 14:41:02 -- common/autotest_common.sh@10 -- # set +x 00:04:02.470 ************************************ 00:04:02.470 START TEST guess_driver 00:04:02.470 ************************************ 00:04:02.470 14:41:02 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:02.470 14:41:02 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:02.470 14:41:02 -- setup/driver.sh@47 -- # local fail=0 00:04:02.470 14:41:02 -- setup/driver.sh@49 -- # pick_driver 00:04:02.470 14:41:02 -- setup/driver.sh@36 -- # vfio 00:04:02.470 14:41:02 -- setup/driver.sh@21 -- # local iommu_grups 00:04:02.470 14:41:02 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:02.470 14:41:02 -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:02.470 14:41:02 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:02.470 14:41:02 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:02.470 14:41:02 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:02.470 14:41:02 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:02.470 14:41:02 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:02.470 14:41:02 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:02.470 14:41:02 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:02.470 14:41:02 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:02.470 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:02.470 14:41:02 -- setup/driver.sh@30 -- # return 0 00:04:02.470 14:41:02 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:02.470 14:41:02 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:02.470 14:41:02 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:02.470 14:41:02 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:02.470 Looking for driver=vfio-pci 00:04:02.470 14:41:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.470 14:41:02 -- setup/driver.sh@45 -- # setup output config 00:04:02.470 14:41:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.470 
14:41:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 
00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.850 14:41:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.850 14:41:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.850 14:41:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.763 14:41:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.763 14:41:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.763 14:41:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.763 14:41:05 
-- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:05.763 14:41:05 -- setup/driver.sh@65 -- # setup reset 00:04:05.763 14:41:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.763 14:41:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.341 00:04:08.341 real 0m5.761s 00:04:08.341 user 0m1.074s 00:04:08.341 sys 0m1.818s 00:04:08.341 14:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.341 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:08.341 ************************************ 00:04:08.341 END TEST guess_driver 00:04:08.341 ************************************ 00:04:08.341 00:04:08.341 real 0m8.318s 00:04:08.341 user 0m1.664s 00:04:08.341 sys 0m2.884s 00:04:08.342 14:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.342 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 ************************************ 00:04:08.342 END TEST driver 00:04:08.342 ************************************ 00:04:08.342 14:41:08 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:08.342 14:41:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.342 14:41:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.342 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 ************************************ 00:04:08.342 START TEST devices 00:04:08.342 ************************************ 00:04:08.342 14:41:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:08.342 * Looking for test storage... 
00:04:08.342 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:08.342 14:41:08 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:08.342 14:41:08 -- setup/devices.sh@192 -- # setup reset 00:04:08.342 14:41:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.342 14:41:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.724 14:41:09 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:09.724 14:41:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:09.724 14:41:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:09.724 14:41:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:09.724 14:41:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:09.724 14:41:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:09.724 14:41:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:09.724 14:41:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.724 14:41:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:09.724 14:41:09 -- setup/devices.sh@196 -- # blocks=() 00:04:09.724 14:41:09 -- setup/devices.sh@196 -- # declare -a blocks 00:04:09.724 14:41:09 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:09.724 14:41:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:09.724 14:41:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:09.724 14:41:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:09.724 14:41:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:09.724 14:41:09 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:09.724 14:41:09 -- setup/devices.sh@202 -- # pci=0000:81:00.0 00:04:09.724 14:41:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\1\:\0\0\.\0* ]] 00:04:09.724 14:41:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:09.724 14:41:09 -- scripts/common.sh@378 -- # 
local block=nvme0n1 pt 00:04:09.724 14:41:09 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:09.983 No valid GPT data, bailing 00:04:09.983 14:41:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:09.983 14:41:09 -- scripts/common.sh@391 -- # pt= 00:04:09.983 14:41:09 -- scripts/common.sh@392 -- # return 1 00:04:09.983 14:41:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:09.983 14:41:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:09.983 14:41:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:09.983 14:41:09 -- setup/common.sh@80 -- # echo 2000398934016 00:04:09.983 14:41:09 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:09.983 14:41:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:09.983 14:41:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:81:00.0 00:04:09.983 14:41:09 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:09.983 14:41:09 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:09.983 14:41:09 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:09.983 14:41:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.983 14:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.983 14:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:09.983 ************************************ 00:04:09.983 START TEST nvme_mount 00:04:09.983 ************************************ 00:04:09.983 14:41:09 -- common/autotest_common.sh@1111 -- # nvme_mount 00:04:09.983 14:41:09 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:09.983 14:41:09 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:09.983 14:41:09 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.983 14:41:09 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.983 14:41:09 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:09.983 14:41:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.983 14:41:09 -- setup/common.sh@40 -- # local part_no=1 00:04:09.983 14:41:09 -- setup/common.sh@41 -- # local size=1073741824 00:04:09.983 14:41:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.983 14:41:09 -- setup/common.sh@44 -- # parts=() 00:04:09.983 14:41:09 -- setup/common.sh@44 -- # local parts 00:04:09.983 14:41:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.983 14:41:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.983 14:41:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.983 14:41:09 -- setup/common.sh@46 -- # (( part++ )) 00:04:09.983 14:41:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.983 14:41:09 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:09.983 14:41:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.984 14:41:09 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:10.925 Creating new GPT entries in memory. 00:04:10.925 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.925 other utilities. 00:04:10.925 14:41:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.925 14:41:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.925 14:41:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.925 14:41:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.925 14:41:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:12.309 Creating new GPT entries in memory. 00:04:12.309 The operation has completed successfully. 
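The sgdisk range logged above (--new=1:2048:2099199) follows from the sector arithmetic in setup/common.sh. A minimal sketch, assuming the 1 GiB size and 512-byte sectors shown in the log (size=1073741824, size /= 512); the device path is taken from the log and the echo is a dry run, nothing is partitioned:

```shell
size=1073741824            # bytes requested for the partition (from the log)
(( size /= 512 ))          # convert to 512-byte sectors -> 2097152
part_start=2048            # first usable sector on an empty GPT disk
(( part_end = part_start + size - 1 ))
echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"
```

This reproduces the exact end sector 2099199 seen in the flock'd sgdisk call above.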
00:04:12.309 14:41:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:12.309 14:41:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.309 14:41:11 -- setup/common.sh@62 -- # wait 88885 00:04:12.309 14:41:11 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.309 14:41:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:12.309 14:41:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.309 14:41:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.309 14:41:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.309 14:41:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.309 14:41:12 -- setup/devices.sh@105 -- # verify 0000:81:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.309 14:41:12 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:12.309 14:41:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.309 14:41:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.309 14:41:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.309 14:41:12 -- setup/devices.sh@53 -- # local found=0 00:04:12.309 14:41:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.309 14:41:12 -- setup/devices.sh@56 -- # : 00:04:12.309 14:41:12 -- setup/devices.sh@59 -- # local pci status 00:04:12.309 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.309 14:41:12 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:12.309 14:41:12 -- setup/devices.sh@47 -- # setup output config 00:04:12.309 14:41:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.309 14:41:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:13.251 14:41:12 -- setup/devices.sh@63 -- # found=1 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:13.251 14:41:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.251 14:41:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.251 14:41:13 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.251 14:41:13 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.251 14:41:13 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.251 14:41:13 -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.251 14:41:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:13.251 14:41:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.251 14:41:13 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.251 14:41:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.251 14:41:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.251 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.252 14:41:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.252 14:41:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.511 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:13.511 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:13.511 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.511 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.511 14:41:13 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:13.511 14:41:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:13.511 14:41:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.511 14:41:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:13.511 14:41:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:13.511 14:41:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.511 14:41:13 -- setup/devices.sh@116 -- # verify 0000:81:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.511 14:41:13 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:13.511 14:41:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:13.511 14:41:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.511 14:41:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.511 14:41:13 -- setup/devices.sh@53 -- # local found=0 00:04:13.511 14:41:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.511 14:41:13 -- setup/devices.sh@56 -- # : 00:04:13.511 14:41:13 -- setup/devices.sh@59 -- # local pci status 00:04:13.511 14:41:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.511 14:41:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:13.511 14:41:13 -- setup/devices.sh@47 -- # setup output config 00:04:13.511 14:41:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.511 14:41:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:14.890 14:41:14 -- setup/devices.sh@63 -- # found=1 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 
00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.890 14:41:14 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:14.890 14:41:14 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.890 14:41:14 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.890 14:41:14 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.890 14:41:14 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.890 14:41:14 -- setup/devices.sh@125 -- # verify 0000:81:00.0 data@nvme0n1 '' '' 00:04:14.890 14:41:14 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:14.890 14:41:14 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:14.890 14:41:14 -- setup/devices.sh@50 -- # local mount_point= 00:04:14.890 14:41:14 -- setup/devices.sh@51 -- # local test_file= 00:04:14.890 14:41:14 -- setup/devices.sh@53 -- # local found=0 00:04:14.890 14:41:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.890 14:41:14 -- setup/devices.sh@59 -- # local pci status 00:04:14.890 14:41:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.890 14:41:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:14.890 14:41:14 -- setup/devices.sh@47 -- # setup output config 00:04:14.890 14:41:14 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:14.890 14:41:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:16.269 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:16.270 14:41:16 -- setup/devices.sh@63 -- # found=1 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.270 14:41:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:16.270 14:41:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:16.270 14:41:16 -- setup/devices.sh@68 -- # return 0 00:04:16.270 14:41:16 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:16.270 14:41:16 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.270 14:41:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:16.270 14:41:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.270 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:16.270 
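The cleanup_nvme sequence logged above (umount the test mount point, wipe the partition, wipe the disk) can be sketched as a dry-run helper. The function name, mount point, and guard comments are illustrative assumptions; it only prints the commands rather than touching any device:

```shell
# Hypothetical dry-run mirroring cleanup_nvme in setup/devices.sh:
# each step is gated in the real script (mountpoint -q, [[ -b ... ]]).
cleanup_nvme_dry() {
  local mount_point=$1 disk=$2
  echo "umount $mount_point"            # only if mountpoint -q succeeds
  echo "wipefs --all ${disk}p1"         # only if the partition node exists
  echo "wipefs --all ${disk}"           # finally clear the whole disk
}
cleanup_nvme_dry /tmp/nvme_mount /dev/nvme0n1
```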
00:04:16.270 real 0m6.285s 00:04:16.270 user 0m1.472s 00:04:16.270 sys 0m2.420s 00:04:16.270 14:41:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.270 14:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:16.270 ************************************ 00:04:16.270 END TEST nvme_mount 00:04:16.270 ************************************ 00:04:16.270 14:41:16 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:16.270 14:41:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.270 14:41:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.270 14:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:16.270 ************************************ 00:04:16.270 START TEST dm_mount 00:04:16.270 ************************************ 00:04:16.270 14:41:16 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:16.270 14:41:16 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:16.270 14:41:16 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:16.270 14:41:16 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:16.270 14:41:16 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:16.270 14:41:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.270 14:41:16 -- setup/common.sh@40 -- # local part_no=2 00:04:16.270 14:41:16 -- setup/common.sh@41 -- # local size=1073741824 00:04:16.270 14:41:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.270 14:41:16 -- setup/common.sh@44 -- # parts=() 00:04:16.270 14:41:16 -- setup/common.sh@44 -- # local parts 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.270 14:41:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.270 14:41:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:16.270 14:41:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.270 14:41:16 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.270 14:41:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.270 14:41:16 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:17.653 Creating new GPT entries in memory. 00:04:17.653 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.653 other utilities. 00:04:17.653 14:41:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.653 14:41:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.653 14:41:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.653 14:41:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.653 14:41:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.596 Creating new GPT entries in memory. 00:04:18.596 The operation has completed successfully. 00:04:18.596 14:41:18 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.596 14:41:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.596 14:41:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.596 14:41:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.596 14:41:18 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:19.536 The operation has completed successfully. 
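The two sgdisk calls above (--new=1:2048:2099199 and --new=2:2099200:4196351) come from the same start/end recurrence in setup/common.sh, run once per partition for dm_mount. A dry-run sketch of that loop, assuming 1 GiB partitions and 512-byte sectors as in the log; nothing is written to disk:

```shell
size=$(( 1073741824 / 512 ))   # 2097152 sectors per 1 GiB partition
part_start=0 part_end=0
for part in 1 2; do
  # first partition starts at sector 2048; each later one right after the previous
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
done
```

Partition 2 therefore starts at 2099200 and ends at 4196351, matching the second flock'd sgdisk call in the log.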
00:04:19.536 14:41:19 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.536 14:41:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.536 14:41:19 -- setup/common.sh@62 -- # wait 91277 00:04:19.536 14:41:19 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:19.536 14:41:19 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.536 14:41:19 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.536 14:41:19 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:19.536 14:41:19 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:19.536 14:41:19 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.536 14:41:19 -- setup/devices.sh@161 -- # break 00:04:19.536 14:41:19 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.536 14:41:19 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:19.536 14:41:19 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:19.536 14:41:19 -- setup/devices.sh@166 -- # dm=dm-0 00:04:19.536 14:41:19 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:19.536 14:41:19 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:19.536 14:41:19 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.536 14:41:19 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:19.536 14:41:19 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.536 14:41:19 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.536 14:41:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:19.536 14:41:19 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.536 14:41:19 -- setup/devices.sh@174 -- # verify 0000:81:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.536 14:41:19 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:19.536 14:41:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:19.536 14:41:19 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.536 14:41:19 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.536 14:41:19 -- setup/devices.sh@53 -- # local found=0 00:04:19.536 14:41:19 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.536 14:41:19 -- setup/devices.sh@56 -- # : 00:04:19.536 14:41:19 -- setup/devices.sh@59 -- # local pci status 00:04:19.536 14:41:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.536 14:41:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:19.536 14:41:19 -- setup/devices.sh@47 -- # setup output config 00:04:19.536 14:41:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.536 14:41:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:20.918 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.918 14:41:20 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:20.918 14:41:20 -- setup/devices.sh@63 -- # found=1 00:04:20.918 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.918 14:41:20 -- setup/devices.sh@62 -- 
# [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.919 14:41:20 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:20.919 14:41:20 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:20.919 14:41:20 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.919 14:41:20 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.919 14:41:20 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:20.919 14:41:20 -- setup/devices.sh@184 -- # verify 0000:81:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:20.919 14:41:20 -- setup/devices.sh@48 -- # local dev=0000:81:00.0 00:04:20.919 14:41:20 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:20.919 14:41:20 -- setup/devices.sh@50 -- # local mount_point= 00:04:20.919 14:41:20 -- setup/devices.sh@51 -- # local test_file= 00:04:20.919 14:41:20 -- setup/devices.sh@53 -- # local found=0 00:04:20.919 14:41:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:20.919 14:41:20 -- setup/devices.sh@59 -- # 
local pci status 00:04:20.919 14:41:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.919 14:41:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:81:00.0 00:04:20.919 14:41:20 -- setup/devices.sh@47 -- # setup output config 00:04:20.919 14:41:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.919 14:41:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:81:00.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:21.857 14:41:21 -- setup/devices.sh@63 -- # found=1 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 
0000:00:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.857 14:41:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\1\:\0\0\.\0 ]] 00:04:21.857 14:41:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.137 14:41:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.137 14:41:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.137 14:41:22 -- setup/devices.sh@68 -- # return 0 00:04:22.137 14:41:22 -- setup/devices.sh@187 -- # cleanup_dm 00:04:22.137 14:41:22 -- setup/devices.sh@33 -- # mountpoint -q 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:22.138 14:41:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.138 14:41:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:22.138 14:41:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.138 14:41:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:22.138 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.138 14:41:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.138 14:41:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:22.138 00:04:22.138 real 0m5.727s 00:04:22.138 user 0m0.994s 00:04:22.138 sys 0m1.617s 00:04:22.138 14:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.138 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:22.138 ************************************ 00:04:22.138 END TEST dm_mount 00:04:22.138 ************************************ 00:04:22.138 14:41:22 -- setup/devices.sh@1 -- # cleanup 00:04:22.138 14:41:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:22.138 14:41:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.138 14:41:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.138 14:41:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.138 14:41:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.138 14:41:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.398 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.398 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.398 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.398 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.398 14:41:22 -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.398 14:41:22 -- 
setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:22.398 14:41:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.398 14:41:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.398 14:41:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.398 14:41:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.398 14:41:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.398 00:04:22.398 real 0m14.123s 00:04:22.398 user 0m3.206s 00:04:22.398 sys 0m5.154s 00:04:22.398 14:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.399 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:22.399 ************************************ 00:04:22.399 END TEST devices 00:04:22.399 ************************************ 00:04:22.399 00:04:22.399 real 0m47.479s 00:04:22.399 user 0m12.999s 00:04:22.399 sys 0m19.859s 00:04:22.399 14:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.399 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:22.399 ************************************ 00:04:22.399 END TEST setup.sh 00:04:22.399 ************************************ 00:04:22.399 14:41:22 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:23.777 Hugepages 00:04:23.777 node hugesize free / total 00:04:23.777 node0 1048576kB 0 / 0 00:04:23.777 node0 2048kB 2048 / 2048 00:04:23.777 node1 1048576kB 0 / 0 00:04:23.777 node1 2048kB 0 / 0 00:04:23.777 00:04:23.777 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.777 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.6 8086 
0e26 0 ioatdma - - 00:04:23.777 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:23.777 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:23.777 NVMe 0000:81:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:23.777 14:41:23 -- spdk/autotest.sh@130 -- # uname -s 00:04:23.777 14:41:23 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:23.777 14:41:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:23.777 14:41:23 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:25.159 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:25.159 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:25.159 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:27.073 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.073 14:41:27 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:28.011 
14:41:28 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:28.011 14:41:28 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:28.011 14:41:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.011 14:41:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:28.011 14:41:28 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:28.011 14:41:28 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:28.011 14:41:28 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.011 14:41:28 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.011 14:41:28 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:28.011 14:41:28 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:28.011 14:41:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:81:00.0 00:04:28.011 14:41:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.392 Waiting for block devices as requested 00:04:29.392 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:04:29.392 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:29.392 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:29.392 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:29.652 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:29.652 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:29.652 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:29.914 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:29.914 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:29.914 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:29.914 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:30.175 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:30.175 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:30.175 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:30.175 0000:80:04.2 (8086 0e22): vfio-pci 
-> ioatdma 00:04:30.435 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:30.435 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:30.435 14:41:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.435 14:41:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:81:00.0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1488 -- # grep 0000:81:00.0/nvme/nvme 00:04:30.435 14:41:30 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 ]] 00:04:30.435 14:41:30 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/nvme/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:30.435 14:41:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.435 14:41:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.435 14:41:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:30.435 14:41:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.435 14:41:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.435 14:41:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:30.435 14:41:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.435 14:41:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.435 14:41:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.435 14:41:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.435 14:41:30 
-- common/autotest_common.sh@1543 -- # continue 00:04:30.435 14:41:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:30.435 14:41:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.435 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:04:30.693 14:41:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:30.693 14:41:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:30.693 14:41:30 -- common/autotest_common.sh@10 -- # set +x 00:04:30.693 14:41:30 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:31.631 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.631 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.631 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.540 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.798 14:41:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:33.798 14:41:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.798 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:33.798 14:41:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:33.798 14:41:33 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:33.798 14:41:33 -- 
common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.798 14:41:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:33.798 14:41:33 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:33.798 14:41:33 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:33.798 14:41:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:33.798 14:41:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:33.798 14:41:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.798 14:41:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.798 14:41:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:33.798 14:41:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:33.798 14:41:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:81:00.0 00:04:33.798 14:41:33 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:33.798 14:41:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:81:00.0/device 00:04:33.798 14:41:33 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:33.798 14:41:33 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:33.798 14:41:33 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:33.798 14:41:33 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:81:00.0 00:04:33.798 14:41:33 -- common/autotest_common.sh@1578 -- # [[ -z 0000:81:00.0 ]] 00:04:33.798 14:41:33 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=96599 00:04:33.798 14:41:33 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.798 14:41:33 -- common/autotest_common.sh@1584 -- # waitforlisten 96599 00:04:33.798 14:41:33 -- common/autotest_common.sh@817 -- # '[' -z 96599 ']' 00:04:33.798 14:41:33 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:33.798 14:41:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:33.798 14:41:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.798 14:41:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:33.798 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:34.056 [2024-04-26 14:41:33.977418] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:04:34.056 [2024-04-26 14:41:33.977545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96599 ] 00:04:34.056 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.056 [2024-04-26 14:41:34.096593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.314 [2024-04-26 14:41:34.305739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.251 14:41:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.251 14:41:35 -- common/autotest_common.sh@850 -- # return 0 00:04:35.251 14:41:35 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:35.251 14:41:35 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:35.251 14:41:35 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:81:00.0 00:04:38.537 nvme0n1 00:04:38.537 14:41:38 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:38.537 [2024-04-26 14:41:38.358918] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:38.537 request: 00:04:38.537 { 
00:04:38.537 "nvme_ctrlr_name": "nvme0", 00:04:38.537 "password": "test", 00:04:38.537 "method": "bdev_nvme_opal_revert", 00:04:38.537 "req_id": 1 00:04:38.537 } 00:04:38.537 Got JSON-RPC error response 00:04:38.537 response: 00:04:38.537 { 00:04:38.537 "code": -32602, 00:04:38.537 "message": "Invalid parameters" 00:04:38.537 } 00:04:38.537 14:41:38 -- common/autotest_common.sh@1590 -- # true 00:04:38.537 14:41:38 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:38.537 14:41:38 -- common/autotest_common.sh@1594 -- # killprocess 96599 00:04:38.537 14:41:38 -- common/autotest_common.sh@936 -- # '[' -z 96599 ']' 00:04:38.537 14:41:38 -- common/autotest_common.sh@940 -- # kill -0 96599 00:04:38.537 14:41:38 -- common/autotest_common.sh@941 -- # uname 00:04:38.537 14:41:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.537 14:41:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96599 00:04:38.537 14:41:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.537 14:41:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.537 14:41:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96599' 00:04:38.537 killing process with pid 96599 00:04:38.537 14:41:38 -- common/autotest_common.sh@955 -- # kill 96599 00:04:38.537 14:41:38 -- common/autotest_common.sh@960 -- # wait 96599 00:04:42.734 14:41:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:42.734 14:41:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:42.734 14:41:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.734 14:41:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:42.734 14:41:42 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:42.734 14:41:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.734 14:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.734 14:41:42 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 
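Earlier in the log (`common/autotest_common.sh@1531-1534`) the controller's OACS field is parsed out of `nvme id-ctrl` output with `grep`/`cut`, and bit 3 (namespace management, 0x8) is tested before the opal steps run. A self-contained sketch of that pipeline, with the `id-ctrl` line simulated since no controller is assumed present:

```shell
# Simulated "nvme id-ctrl /dev/nvme0" output line; the real command
# needs an NVMe device, so the line from the log is reproduced here.
id_ctrl='oacs      : 0xe'
# Same extraction as the trace: grep the field, keep text after the colon.
oacs=$(printf '%s\n' "$id_ctrl" | grep oacs | cut -d: -f2)
# Bit 3 (0x8) of OACS advertises namespace-management support.
oacs_ns_manage=$(( oacs & 0x8 ))
echo "oacs_ns_manage=$oacs_ns_manage"
```

With `oacs=' 0xe'` this yields `oacs_ns_manage=8`, matching the `[[ 8 -ne 0 ]]` branch taken in the trace.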
00:04:42.734 14:41:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.734 14:41:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.734 14:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.734 ************************************ 00:04:42.734 START TEST env 00:04:42.734 ************************************ 00:04:42.734 14:41:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:42.734 * Looking for test storage... 00:04:42.734 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:42.734 14:41:42 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.734 14:41:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.734 14:41:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.734 14:41:42 -- common/autotest_common.sh@10 -- # set +x 00:04:42.994 ************************************ 00:04:42.994 START TEST env_memory 00:04:42.994 ************************************ 00:04:42.994 14:41:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:42.994 00:04:42.994 00:04:42.994 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.994 http://cunit.sourceforge.net/ 00:04:42.994 00:04:42.994 00:04:42.994 Suite: memory 00:04:42.994 Test: alloc and free memory map ...[2024-04-26 14:41:42.907678] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.994 passed 00:04:42.994 Test: mem map translation ...[2024-04-26 14:41:42.949189] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.994 [2024-04-26 14:41:42.949231] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.994 [2024-04-26 14:41:42.949300] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.994 [2024-04-26 14:41:42.949329] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.994 passed 00:04:42.994 Test: mem map registration ...[2024-04-26 14:41:43.013876] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:42.994 [2024-04-26 14:41:43.013914] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:42.994 passed 00:04:43.255 Test: mem map adjacent registrations ...passed 00:04:43.255 00:04:43.255 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.255 suites 1 1 n/a 0 0 00:04:43.255 tests 4 4 4 0 0 00:04:43.255 asserts 152 152 152 0 n/a 00:04:43.255 00:04:43.255 Elapsed time = 0.230 seconds 00:04:43.255 00:04:43.255 real 0m0.249s 00:04:43.255 user 0m0.223s 00:04:43.255 sys 0m0.025s 00:04:43.255 14:41:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.255 14:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.255 ************************************ 00:04:43.255 END TEST env_memory 00:04:43.255 ************************************ 00:04:43.255 14:41:43 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.255 14:41:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.255 14:41:43 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:43.255 14:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:43.256 ************************************ 00:04:43.256 START TEST env_vtophys 00:04:43.256 ************************************ 00:04:43.256 14:41:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.256 EAL: lib.eal log level changed from notice to debug 00:04:43.256 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.256 EAL: Detected lcore 1 as core 1 on socket 0 00:04:43.256 EAL: Detected lcore 2 as core 2 on socket 0 00:04:43.256 EAL: Detected lcore 3 as core 3 on socket 0 00:04:43.256 EAL: Detected lcore 4 as core 4 on socket 0 00:04:43.256 EAL: Detected lcore 5 as core 5 on socket 0 00:04:43.256 EAL: Detected lcore 6 as core 8 on socket 0 00:04:43.256 EAL: Detected lcore 7 as core 9 on socket 0 00:04:43.256 EAL: Detected lcore 8 as core 10 on socket 0 00:04:43.256 EAL: Detected lcore 9 as core 11 on socket 0 00:04:43.256 EAL: Detected lcore 10 as core 12 on socket 0 00:04:43.256 EAL: Detected lcore 11 as core 13 on socket 0 00:04:43.256 EAL: Detected lcore 12 as core 0 on socket 1 00:04:43.256 EAL: Detected lcore 13 as core 1 on socket 1 00:04:43.256 EAL: Detected lcore 14 as core 2 on socket 1 00:04:43.256 EAL: Detected lcore 15 as core 3 on socket 1 00:04:43.256 EAL: Detected lcore 16 as core 4 on socket 1 00:04:43.256 EAL: Detected lcore 17 as core 5 on socket 1 00:04:43.256 EAL: Detected lcore 18 as core 8 on socket 1 00:04:43.256 EAL: Detected lcore 19 as core 9 on socket 1 00:04:43.256 EAL: Detected lcore 20 as core 10 on socket 1 00:04:43.256 EAL: Detected lcore 21 as core 11 on socket 1 00:04:43.256 EAL: Detected lcore 22 as core 12 on socket 1 00:04:43.256 EAL: Detected lcore 23 as core 13 on socket 1 00:04:43.256 EAL: Detected lcore 24 as core 0 on socket 0 00:04:43.256 EAL: Detected lcore 25 as core 1 on socket 0 00:04:43.256 EAL: Detected lcore 26 as core 2 on socket 0 00:04:43.256 EAL: 
Detected lcore 27 as core 3 on socket 0 00:04:43.256 EAL: Detected lcore 28 as core 4 on socket 0 00:04:43.256 EAL: Detected lcore 29 as core 5 on socket 0 00:04:43.256 EAL: Detected lcore 30 as core 8 on socket 0 00:04:43.256 EAL: Detected lcore 31 as core 9 on socket 0 00:04:43.256 EAL: Detected lcore 32 as core 10 on socket 0 00:04:43.256 EAL: Detected lcore 33 as core 11 on socket 0 00:04:43.256 EAL: Detected lcore 34 as core 12 on socket 0 00:04:43.256 EAL: Detected lcore 35 as core 13 on socket 0 00:04:43.256 EAL: Detected lcore 36 as core 0 on socket 1 00:04:43.256 EAL: Detected lcore 37 as core 1 on socket 1 00:04:43.256 EAL: Detected lcore 38 as core 2 on socket 1 00:04:43.256 EAL: Detected lcore 39 as core 3 on socket 1 00:04:43.256 EAL: Detected lcore 40 as core 4 on socket 1 00:04:43.256 EAL: Detected lcore 41 as core 5 on socket 1 00:04:43.256 EAL: Detected lcore 42 as core 8 on socket 1 00:04:43.256 EAL: Detected lcore 43 as core 9 on socket 1 00:04:43.256 EAL: Detected lcore 44 as core 10 on socket 1 00:04:43.256 EAL: Detected lcore 45 as core 11 on socket 1 00:04:43.256 EAL: Detected lcore 46 as core 12 on socket 1 00:04:43.256 EAL: Detected lcore 47 as core 13 on socket 1 00:04:43.256 EAL: Maximum logical cores by configuration: 128 00:04:43.256 EAL: Detected CPU lcores: 48 00:04:43.256 EAL: Detected NUMA nodes: 2 00:04:43.256 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:43.256 EAL: Detected shared linkage of DPDK 00:04:43.256 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.516 EAL: Bus pci wants IOVA as 'DC' 00:04:43.516 EAL: Buses did not request a specific IOVA mode. 00:04:43.516 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:43.516 EAL: Selected IOVA mode 'VA' 00:04:43.516 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.516 EAL: Probing VFIO support... 
00:04:43.516 EAL: IOMMU type 1 (Type 1) is supported
00:04:43.516 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:43.516 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:43.516 EAL: VFIO support initialized
00:04:43.516 EAL: Ask a virtual area of 0x2e000 bytes
00:04:43.516 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:43.516 EAL: Setting up physically contiguous memory...
00:04:43.516 EAL: Setting maximum number of open files to 524288
00:04:43.516 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:43.516 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:43.516 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:43.516 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:43.516 EAL: Ask a virtual area of 0x61000 bytes
00:04:43.516 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:43.516 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:43.516 EAL: Ask a virtual area of 0x400000000 bytes
00:04:43.516 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:43.516 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:43.516 EAL: Hugepages will be freed exactly as allocated.
00:04:43.516 EAL: No shared files mode enabled, IPC is disabled
00:04:43.516 EAL: No shared files mode enabled, IPC is disabled
00:04:43.516 EAL: TSC frequency is ~2700000 KHz
00:04:43.516 EAL: Main lcore 0 is ready (tid=7f599f6faa40;cpuset=[0])
00:04:43.516 EAL: Trying to obtain current memory policy.
00:04:43.516 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.516 EAL: Restoring previous memory policy: 0
00:04:43.516 EAL: request: mp_malloc_sync
00:04:43.516 EAL: No shared files mode enabled, IPC is disabled
00:04:43.516 EAL: Heap on socket 0 was expanded by 2MB
00:04:43.516 EAL: No shared files mode enabled, IPC is disabled
00:04:43.516 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:43.516 EAL: Mem event callback 'spdk:(nil)' registered
00:04:43.516
00:04:43.516
00:04:43.516 CUnit - A unit testing framework for C - Version 2.1-3
00:04:43.516 http://cunit.sourceforge.net/
00:04:43.516
00:04:43.516
00:04:43.516 Suite: components_suite
00:04:43.775 Test: vtophys_malloc_test ...passed
00:04:43.775 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:43.775 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.775 EAL: Restoring previous memory policy: 4
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was expanded by 4MB
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was shrunk by 4MB
00:04:43.775 EAL: Trying to obtain current memory policy.
00:04:43.775 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.775 EAL: Restoring previous memory policy: 4
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was expanded by 6MB
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was shrunk by 6MB
00:04:43.775 EAL: Trying to obtain current memory policy.
00:04:43.775 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.775 EAL: Restoring previous memory policy: 4
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was expanded by 10MB
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was shrunk by 10MB
00:04:43.775 EAL: Trying to obtain current memory policy.
00:04:43.775 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.775 EAL: Restoring previous memory policy: 4
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was expanded by 18MB
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was shrunk by 18MB
00:04:43.775 EAL: Trying to obtain current memory policy.
00:04:43.775 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.775 EAL: Restoring previous memory policy: 4
00:04:43.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.775 EAL: request: mp_malloc_sync
00:04:43.775 EAL: No shared files mode enabled, IPC is disabled
00:04:43.775 EAL: Heap on socket 0 was expanded by 34MB
00:04:44.033 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.033 EAL: request: mp_malloc_sync
00:04:44.033 EAL: No shared files mode enabled, IPC is disabled
00:04:44.033 EAL: Heap on socket 0 was shrunk by 34MB
00:04:44.033 EAL: Trying to obtain current memory policy.
00:04:44.033 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:44.033 EAL: Restoring previous memory policy: 4
00:04:44.033 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.033 EAL: request: mp_malloc_sync
00:04:44.033 EAL: No shared files mode enabled, IPC is disabled
00:04:44.033 EAL: Heap on socket 0 was expanded by 66MB
00:04:44.033 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.033 EAL: request: mp_malloc_sync
00:04:44.033 EAL: No shared files mode enabled, IPC is disabled
00:04:44.033 EAL: Heap on socket 0 was shrunk by 66MB
00:04:44.292 EAL: Trying to obtain current memory policy.
00:04:44.292 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:44.292 EAL: Restoring previous memory policy: 4
00:04:44.292 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.292 EAL: request: mp_malloc_sync
00:04:44.292 EAL: No shared files mode enabled, IPC is disabled
00:04:44.292 EAL: Heap on socket 0 was expanded by 130MB
00:04:44.292 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.550 EAL: request: mp_malloc_sync
00:04:44.550 EAL: No shared files mode enabled, IPC is disabled
00:04:44.550 EAL: Heap on socket 0 was shrunk by 130MB
00:04:44.550 EAL: Trying to obtain current memory policy.
00:04:44.550 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:44.550 EAL: Restoring previous memory policy: 4
00:04:44.550 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.550 EAL: request: mp_malloc_sync
00:04:44.550 EAL: No shared files mode enabled, IPC is disabled
00:04:44.550 EAL: Heap on socket 0 was expanded by 258MB
00:04:45.117 EAL: Calling mem event callback 'spdk:(nil)'
00:04:45.117 EAL: request: mp_malloc_sync
00:04:45.117 EAL: No shared files mode enabled, IPC is disabled
00:04:45.117 EAL: Heap on socket 0 was shrunk by 258MB
00:04:45.376 EAL: Trying to obtain current memory policy.
00:04:45.376 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:45.634 EAL: Restoring previous memory policy: 4
00:04:45.634 EAL: Calling mem event callback 'spdk:(nil)'
00:04:45.635 EAL: request: mp_malloc_sync
00:04:45.635 EAL: No shared files mode enabled, IPC is disabled
00:04:45.635 EAL: Heap on socket 0 was expanded by 514MB
00:04:46.569 EAL: Calling mem event callback 'spdk:(nil)'
00:04:46.569 EAL: request: mp_malloc_sync
00:04:46.569 EAL: No shared files mode enabled, IPC is disabled
00:04:46.569 EAL: Heap on socket 0 was shrunk by 514MB
00:04:47.135 EAL: Trying to obtain current memory policy.
00:04:47.135 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:47.394 EAL: Restoring previous memory policy: 4
00:04:47.394 EAL: Calling mem event callback 'spdk:(nil)'
00:04:47.394 EAL: request: mp_malloc_sync
00:04:47.394 EAL: No shared files mode enabled, IPC is disabled
00:04:47.394 EAL: Heap on socket 0 was expanded by 1026MB
00:04:48.772 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.030 EAL: request: mp_malloc_sync
00:04:49.030 EAL: No shared files mode enabled, IPC is disabled
00:04:49.030 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:50.405 passed
00:04:50.405
00:04:50.406 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.406 suites 1 1 n/a 0 0
00:04:50.406 tests 2 2 2 0 0
00:04:50.406 asserts 497 497 497 0 n/a
00:04:50.406
00:04:50.406 Elapsed time = 6.932 seconds
00:04:50.406 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.406 EAL: request: mp_malloc_sync
00:04:50.406 EAL: No shared files mode enabled, IPC is disabled
00:04:50.406 EAL: Heap on socket 0 was shrunk by 2MB
00:04:50.406 EAL: No shared files mode enabled, IPC is disabled
00:04:50.406 EAL: No shared files mode enabled, IPC is disabled
00:04:50.406 EAL: No shared files mode enabled, IPC is disabled
00:04:50.406
00:04:50.406 real 0m7.183s
00:04:50.406 user 0m6.139s
00:04:50.406 sys 0m0.988s
00:04:50.406 14:41:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:50.406 14:41:50 -- common/autotest_common.sh@10 -- # set +x
00:04:50.406 ************************************
00:04:50.406 END TEST env_vtophys
00:04:50.406 ************************************
00:04:50.406 14:41:50 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.406 14:41:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:50.406 14:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:50.406 14:41:50 -- common/autotest_common.sh@10 -- # set +x
00:04:50.666 ************************************
00:04:50.666 START TEST env_pci
************************************
00:04:50.666 14:41:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.666
00:04:50.666
00:04:50.666 CUnit - A unit testing framework for C - Version 2.1-3
00:04:50.666 http://cunit.sourceforge.net/
00:04:50.666
00:04:50.666
00:04:50.666 Suite: pci
00:04:50.666 Test: pci_hook ...[2024-04-26 14:41:50.575391] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98685 has claimed it
00:04:50.666 EAL: Cannot find device (10000:00:01.0)
00:04:50.666 EAL: Failed to attach device on primary process
00:04:50.666 passed
00:04:50.666
00:04:50.666 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.666 suites 1 1 n/a 0 0
00:04:50.666 tests 1 1 1 0 0
00:04:50.666 asserts 25 25 25 0 n/a
00:04:50.666
00:04:50.666 Elapsed time = 0.042 seconds
00:04:50.666
00:04:50.666 real 0m0.094s
00:04:50.666 user 0m0.036s
00:04:50.666 sys 0m0.058s
00:04:50.666 14:41:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:50.666 14:41:50 -- common/autotest_common.sh@10 -- # set +x
00:04:50.666 ************************************
00:04:50.666 END TEST env_pci
00:04:50.666 ************************************
00:04:50.666 14:41:50 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:50.666 14:41:50 -- env/env.sh@15 -- # uname
00:04:50.666 14:41:50 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:50.666 14:41:50 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:50.666 14:41:50 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.666 14:41:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:50.666 14:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:50.666 14:41:50 -- common/autotest_common.sh@10 -- # set +x
00:04:50.926 ************************************
00:04:50.926 START TEST env_dpdk_post_init
00:04:50.926 ************************************
00:04:50.926 14:41:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.927 EAL: Detected CPU lcores: 48
00:04:50.927 EAL: Detected NUMA nodes: 2
00:04:50.927 EAL: Detected shared linkage of DPDK
00:04:50.927 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:50.927 EAL: Selected IOVA mode 'VA'
00:04:50.927 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.927 EAL: VFIO support initialized
00:04:50.927 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:50.927 EAL: Using IOMMU type 1 (Type 1)
00:04:50.927 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:04:50.927 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:04:51.186 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:04:52.124 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:81:00.0 (socket 1)
00:04:56.296 EAL: Releasing PCI mapped resource for 0000:81:00.0
00:04:56.296 EAL: Calling pci_unmap_resource for 0000:81:00.0 at 0x202001040000
00:04:56.296 Starting DPDK initialization...
00:04:56.296 Starting SPDK post initialization...
00:04:56.296 SPDK NVMe probe
00:04:56.296 Attaching to 0000:81:00.0
00:04:56.296 Attached to 0000:81:00.0
00:04:56.296 Cleaning up...
00:04:56.296
00:04:56.296 real 0m5.384s
00:04:56.296 user 0m4.112s
00:04:56.296 sys 0m0.329s
00:04:56.296 14:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.296 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.296 ************************************
00:04:56.296 END TEST env_dpdk_post_init
00:04:56.296 ************************************
00:04:56.296 14:41:56 -- env/env.sh@26 -- # uname
00:04:56.296 14:41:56 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:56.296 14:41:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:56.296 14:41:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.296 14:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.296 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.296 ************************************
00:04:56.296 START TEST env_mem_callbacks
00:04:56.296 ************************************
00:04:56.296 14:41:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:56.296 EAL: Detected CPU lcores: 48
00:04:56.296 EAL: Detected NUMA nodes: 2
00:04:56.296 EAL: Detected shared linkage of DPDK
00:04:56.296 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:56.296 EAL: Selected IOVA mode 'VA'
00:04:56.296 EAL: No free 2048 kB hugepages reported on node 1
00:04:56.555 EAL: VFIO support initialized
00:04:56.555 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:56.555
00:04:56.555
00:04:56.555 CUnit - A unit testing framework for C - Version 2.1-3
00:04:56.555 http://cunit.sourceforge.net/
00:04:56.555
00:04:56.555
00:04:56.555 Suite: memory
00:04:56.555 Test: test ...
00:04:56.555 register 0x200000200000 2097152
00:04:56.555 malloc 3145728
00:04:56.555 register 0x200000400000 4194304
00:04:56.555 buf 0x2000004fffc0 len 3145728 PASSED
00:04:56.555 malloc 64
00:04:56.555 buf 0x2000004ffec0 len 64 PASSED
00:04:56.555 malloc 4194304
00:04:56.555 register 0x200000800000 6291456
00:04:56.555 buf 0x2000009fffc0 len 4194304 PASSED
00:04:56.555 free 0x2000004fffc0 3145728
00:04:56.555 free 0x2000004ffec0 64
00:04:56.555 unregister 0x200000400000 4194304 PASSED
00:04:56.555 free 0x2000009fffc0 4194304
00:04:56.555 unregister 0x200000800000 6291456 PASSED
00:04:56.555 malloc 8388608
00:04:56.555 register 0x200000400000 10485760
00:04:56.555 buf 0x2000005fffc0 len 8388608 PASSED
00:04:56.555 free 0x2000005fffc0 8388608
00:04:56.555 unregister 0x200000400000 10485760 PASSED
00:04:56.555 passed
00:04:56.555
00:04:56.555 Run Summary: Type Total Ran Passed Failed Inactive
00:04:56.555 suites 1 1 n/a 0 0
00:04:56.555 tests 1 1 1 0 0
00:04:56.556 asserts 15 15 15 0 n/a
00:04:56.556
00:04:56.556 Elapsed time = 0.049 seconds
00:04:56.556
00:04:56.556 real 0m0.166s
00:04:56.556 user 0m0.081s
00:04:56.556 sys 0m0.084s
00:04:56.556 14:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.556 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.556 ************************************
00:04:56.556 END TEST env_mem_callbacks
************************************
00:04:56.556
00:04:56.556 real 0m13.770s
00:04:56.556 user 0m10.844s
00:04:56.556 sys 0m1.876s
00:04:56.556 14:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.556 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.556 ************************************
00:04:56.556 END TEST env
00:04:56.556 ************************************
00:04:56.556 14:41:56 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:56.556 14:41:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.556 14:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.556 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.556 ************************************
00:04:56.556 START TEST rpc
00:04:56.556 ************************************
00:04:56.556 14:41:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:04:56.814 * Looking for test storage...
00:04:56.814 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:56.814 14:41:56 -- rpc/rpc.sh@65 -- # spdk_pid=99521
00:04:56.814 14:41:56 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:56.814 14:41:56 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:56.814 14:41:56 -- rpc/rpc.sh@67 -- # waitforlisten 99521
00:04:56.814 14:41:56 -- common/autotest_common.sh@817 -- # '[' -z 99521 ']'
00:04:56.814 14:41:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:56.814 14:41:56 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:56.814 14:41:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:56.814 14:41:56 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:56.814 14:41:56 -- common/autotest_common.sh@10 -- # set +x
00:04:56.814 [2024-04-26 14:41:56.737019] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:04:56.814 [2024-04-26 14:41:56.737171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99521 ]
00:04:56.814 EAL: No free 2048 kB hugepages reported on node 1
00:04:56.814 [2024-04-26 14:41:56.856184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.071 [2024-04-26 14:41:57.074172] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:57.071 [2024-04-26 14:41:57.074267] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99521' to capture a snapshot of events at runtime.
00:04:57.071 [2024-04-26 14:41:57.074306] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:57.071 [2024-04-26 14:41:57.074324] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:57.071 [2024-04-26 14:41:57.074343] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99521 for offline analysis/debug.
00:04:57.071 [2024-04-26 14:41:57.074394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.007 14:41:57 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:58.007 14:41:57 -- common/autotest_common.sh@850 -- # return 0
00:04:58.007 14:41:57 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:58.007 14:41:57 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:04:58.007 14:41:57 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:58.007 14:41:57 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:58.008 14:41:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.008 14:41:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.008 14:41:57 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 ************************************
00:04:58.008 START TEST rpc_integrity
00:04:58.008 ************************************
00:04:58.008 14:41:57 -- common/autotest_common.sh@1111 -- # rpc_integrity
00:04:58.008 14:41:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:58.008 14:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:57 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 14:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:57 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:58.008 14:41:57 -- rpc/rpc.sh@13 -- # jq length
00:04:58.008 14:41:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:58.008 14:41:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:58.008 14:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:57 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 14:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:58.008 14:41:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:58.008 14:41:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:57 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 14:41:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:57 -- rpc/rpc.sh@16 -- # bdevs='[
00:04:58.008 {
00:04:58.008 "name": "Malloc0",
00:04:58.008 "aliases": [
00:04:58.008 "c2ebd694-9290-491e-9d55-16ef49bd4f21"
00:04:58.008 ],
00:04:58.008 "product_name": "Malloc disk",
00:04:58.008 "block_size": 512,
00:04:58.008 "num_blocks": 16384,
00:04:58.008 "uuid": "c2ebd694-9290-491e-9d55-16ef49bd4f21",
00:04:58.008 "assigned_rate_limits": {
00:04:58.008 "rw_ios_per_sec": 0,
00:04:58.008 "rw_mbytes_per_sec": 0,
00:04:58.008 "r_mbytes_per_sec": 0,
00:04:58.008 "w_mbytes_per_sec": 0
00:04:58.008 },
00:04:58.008 "claimed": false,
00:04:58.008 "zoned": false,
00:04:58.008 "supported_io_types": {
00:04:58.008 "read": true,
00:04:58.008 "write": true,
00:04:58.008 "unmap": true,
00:04:58.008 "write_zeroes": true,
00:04:58.008 "flush": true,
00:04:58.008 "reset": true,
00:04:58.008 "compare": false,
00:04:58.008 "compare_and_write": false,
00:04:58.008 "abort": true,
00:04:58.008 "nvme_admin": false,
00:04:58.008 "nvme_io": false
00:04:58.008 },
00:04:58.008 "memory_domains": [
00:04:58.008 {
00:04:58.008 "dma_device_id": "system",
00:04:58.008 "dma_device_type": 1
00:04:58.008 },
00:04:58.008 {
00:04:58.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:58.008 "dma_device_type": 2
00:04:58.008 }
00:04:58.008 ],
00:04:58.008 "driver_specific": {}
00:04:58.008 }
00:04:58.008 ]'
00:04:58.008 14:41:57 -- rpc/rpc.sh@17 -- # jq length
00:04:58.008 14:41:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:58.008 14:41:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:58.008 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 [2024-04-26 14:41:58.010516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:58.008 [2024-04-26 14:41:58.010584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:58.008 [2024-04-26 14:41:58.010631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022580
00:04:58.008 [2024-04-26 14:41:58.010652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:58.008 [2024-04-26 14:41:58.012985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:58.008 [2024-04-26 14:41:58.013014] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:58.008 Passthru0
00:04:58.008 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:58.008 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:58 -- rpc/rpc.sh@20 -- # bdevs='[
00:04:58.008 {
00:04:58.008 "name": "Malloc0",
00:04:58.008 "aliases": [
00:04:58.008 "c2ebd694-9290-491e-9d55-16ef49bd4f21"
00:04:58.008 ],
00:04:58.008 "product_name": "Malloc disk",
00:04:58.008 "block_size": 512,
00:04:58.008 "num_blocks": 16384,
00:04:58.008 "uuid": "c2ebd694-9290-491e-9d55-16ef49bd4f21",
00:04:58.008 "assigned_rate_limits": {
00:04:58.008 "rw_ios_per_sec": 0,
00:04:58.008 "rw_mbytes_per_sec": 0,
00:04:58.008 "r_mbytes_per_sec": 0,
00:04:58.008 "w_mbytes_per_sec": 0
00:04:58.008 },
00:04:58.008 "claimed": true,
00:04:58.008 "claim_type": "exclusive_write",
00:04:58.008 "zoned": false,
00:04:58.008 "supported_io_types": {
00:04:58.008 "read": true,
00:04:58.008 "write": true,
00:04:58.008 "unmap": true,
00:04:58.008 "write_zeroes": true,
00:04:58.008 "flush": true,
00:04:58.008 "reset": true,
00:04:58.008 "compare": false,
00:04:58.008 "compare_and_write": false,
00:04:58.008 "abort": true,
00:04:58.008 "nvme_admin": false,
00:04:58.008 "nvme_io": false
00:04:58.008 },
00:04:58.008 "memory_domains": [
00:04:58.008 {
00:04:58.008 "dma_device_id": "system",
00:04:58.008 "dma_device_type": 1
00:04:58.008 },
00:04:58.008 {
00:04:58.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:58.008 "dma_device_type": 2
00:04:58.008 }
00:04:58.008 ],
00:04:58.008 "driver_specific": {}
00:04:58.008 },
00:04:58.008 {
00:04:58.008 "name": "Passthru0",
00:04:58.008 "aliases": [
00:04:58.008 "ac07f0f8-7782-5cbd-a9d5-c5f4b06b3591"
00:04:58.008 ],
00:04:58.008 "product_name": "passthru",
00:04:58.008 "block_size": 512,
00:04:58.008 "num_blocks": 16384,
00:04:58.008 "uuid": "ac07f0f8-7782-5cbd-a9d5-c5f4b06b3591",
00:04:58.008 "assigned_rate_limits": {
00:04:58.008 "rw_ios_per_sec": 0,
00:04:58.008 "rw_mbytes_per_sec": 0,
00:04:58.008 "r_mbytes_per_sec": 0,
00:04:58.008 "w_mbytes_per_sec": 0
00:04:58.008 },
00:04:58.008 "claimed": false,
00:04:58.008 "zoned": false,
00:04:58.008 "supported_io_types": {
00:04:58.008 "read": true,
00:04:58.008 "write": true,
00:04:58.008 "unmap": true,
00:04:58.008 "write_zeroes": true,
00:04:58.008 "flush": true,
00:04:58.008 "reset": true,
00:04:58.008 "compare": false,
00:04:58.008 "compare_and_write": false,
00:04:58.008 "abort": true,
00:04:58.008 "nvme_admin": false,
00:04:58.008 "nvme_io": false
00:04:58.008 },
00:04:58.008 "memory_domains": [
00:04:58.008 {
00:04:58.008 "dma_device_id": "system",
00:04:58.008 "dma_device_type": 1
00:04:58.008 },
00:04:58.008 {
00:04:58.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:58.008 "dma_device_type": 2
00:04:58.008 }
00:04:58.008 ],
00:04:58.008 "driver_specific": {
00:04:58.008 "passthru": {
00:04:58.008 "name": "Passthru0",
00:04:58.008 "base_bdev_name": "Malloc0"
00:04:58.008 }
00:04:58.008 }
00:04:58.008 }
00:04:58.008 ]'
00:04:58.008 14:41:58 -- rpc/rpc.sh@21 -- # jq length
00:04:58.008 14:41:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:58.008 14:41:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:58.008 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.008 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.008 14:41:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:58.008 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.008 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:58.267 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:58.267 14:41:58 -- rpc/rpc.sh@26 -- # jq length
00:04:58.267 14:41:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:58.267
00:04:58.267 real 0m0.240s
00:04:58.267 user 0m0.133s
00:04:58.267 sys 0m0.025s
00:04:58.267 14:41:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 ************************************
00:04:58.267 END TEST rpc_integrity
00:04:58.267 ************************************
00:04:58.267 14:41:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:58.267 14:41:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.267 14:41:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 ************************************
00:04:58.267 START TEST rpc_plugins
00:04:58.267 ************************************
00:04:58.267 14:41:58 -- common/autotest_common.sh@1111 -- # rpc_plugins
00:04:58.267 14:41:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:58.267 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:58.267 14:41:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:58.267 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@31 -- # bdevs='[
00:04:58.267 {
00:04:58.267 "name": "Malloc1",
00:04:58.267 "aliases": [
00:04:58.267 "45422125-67cd-430e-8ab0-3e9e1e8393a6"
00:04:58.267 ],
00:04:58.267 "product_name": "Malloc disk",
00:04:58.267 "block_size": 4096,
00:04:58.267 "num_blocks": 256,
00:04:58.267 "uuid": "45422125-67cd-430e-8ab0-3e9e1e8393a6",
00:04:58.267 "assigned_rate_limits": {
00:04:58.267 "rw_ios_per_sec": 0,
00:04:58.267 "rw_mbytes_per_sec": 0,
00:04:58.267 "r_mbytes_per_sec": 0,
00:04:58.267 "w_mbytes_per_sec": 0
00:04:58.267 },
00:04:58.267 "claimed": false,
00:04:58.267 "zoned": false,
00:04:58.267 "supported_io_types": {
00:04:58.267 "read": true,
00:04:58.267 "write": true,
00:04:58.267 "unmap": true,
00:04:58.267 "write_zeroes": true,
00:04:58.267 "flush": true,
00:04:58.267 "reset": true,
00:04:58.267 "compare": false,
00:04:58.267 "compare_and_write": false,
00:04:58.267 "abort": true,
00:04:58.267 "nvme_admin": false,
00:04:58.267 "nvme_io": false
00:04:58.267 },
00:04:58.267 "memory_domains": [
00:04:58.267 {
00:04:58.267 "dma_device_id": "system",
00:04:58.267 "dma_device_type": 1
00:04:58.267 },
00:04:58.267 {
00:04:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:58.267 "dma_device_type": 2
00:04:58.267 }
00:04:58.267 ],
00:04:58.267 "driver_specific": {}
00:04:58.267 }
00:04:58.267 ]'
00:04:58.267 14:41:58 -- rpc/rpc.sh@32 -- # jq length
00:04:58.267 14:41:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:58.267 14:41:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:58.267 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:58.267 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:58.267 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.267 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:58.267 14:41:58 -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:58.267 14:41:58 -- rpc/rpc.sh@36 -- # jq length
00:04:58.525 14:41:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:58.525
00:04:58.525 real 0m0.108s
00:04:58.525 user 0m0.070s
00:04:58.525 sys 0m0.007s
00:04:58.525 14:41:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:58.525 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.525 ************************************
00:04:58.525 END TEST rpc_plugins
00:04:58.525 ************************************
00:04:58.525 14:41:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:58.525 14:41:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.525 14:41:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.525 14:41:58 -- common/autotest_common.sh@10 -- # set +x
00:04:58.525 ************************************
00:04:58.525 START TEST rpc_trace_cmd_test
00:04:58.525 ************************************
00:04:58.525 14:41:58 --
common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:58.525 14:41:58 -- rpc/rpc.sh@40 -- # local info 00:04:58.525 14:41:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:58.525 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.525 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.525 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.525 14:41:58 -- rpc/rpc.sh@42 -- # info='{ 00:04:58.525 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99521", 00:04:58.525 "tpoint_group_mask": "0x8", 00:04:58.525 "iscsi_conn": { 00:04:58.525 "mask": "0x2", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "scsi": { 00:04:58.525 "mask": "0x4", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "bdev": { 00:04:58.525 "mask": "0x8", 00:04:58.525 "tpoint_mask": "0xffffffffffffffff" 00:04:58.525 }, 00:04:58.525 "nvmf_rdma": { 00:04:58.525 "mask": "0x10", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "nvmf_tcp": { 00:04:58.525 "mask": "0x20", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "ftl": { 00:04:58.525 "mask": "0x40", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "blobfs": { 00:04:58.525 "mask": "0x80", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "dsa": { 00:04:58.525 "mask": "0x200", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "thread": { 00:04:58.525 "mask": "0x400", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "nvme_pcie": { 00:04:58.525 "mask": "0x800", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "iaa": { 00:04:58.525 "mask": "0x1000", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "nvme_tcp": { 00:04:58.525 "mask": "0x2000", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "bdev_nvme": { 00:04:58.525 "mask": "0x4000", 00:04:58.525 "tpoint_mask": "0x0" 00:04:58.525 }, 00:04:58.525 "sock": { 00:04:58.525 "mask": "0x8000", 00:04:58.525 
"tpoint_mask": "0x0" 00:04:58.525 } 00:04:58.525 }' 00:04:58.525 14:41:58 -- rpc/rpc.sh@43 -- # jq length 00:04:58.525 14:41:58 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:58.525 14:41:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:58.525 14:41:58 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:58.525 14:41:58 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:58.525 14:41:58 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.525 14:41:58 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.784 14:41:58 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.784 14:41:58 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.784 14:41:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.784 00:04:58.784 real 0m0.182s 00:04:58.784 user 0m0.163s 00:04:58.784 sys 0m0.011s 00:04:58.784 14:41:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.784 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.784 ************************************ 00:04:58.784 END TEST rpc_trace_cmd_test 00:04:58.784 ************************************ 00:04:58.784 14:41:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.784 14:41:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.784 14:41:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.784 14:41:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.784 14:41:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.784 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.784 ************************************ 00:04:58.784 START TEST rpc_daemon_integrity 00:04:58.784 ************************************ 00:04:58.784 14:41:58 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:58.784 14:41:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.784 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.784 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.784 14:41:58 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:04:58.784 14:41:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.784 14:41:58 -- rpc/rpc.sh@13 -- # jq length 00:04:58.784 14:41:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.784 14:41:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.784 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.784 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.784 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.784 14:41:58 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.784 14:41:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.784 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.784 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.784 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.784 14:41:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.784 { 00:04:58.784 "name": "Malloc2", 00:04:58.784 "aliases": [ 00:04:58.784 "8f271023-f2ac-412f-a17d-b1c4abc7fe89" 00:04:58.784 ], 00:04:58.784 "product_name": "Malloc disk", 00:04:58.784 "block_size": 512, 00:04:58.784 "num_blocks": 16384, 00:04:58.784 "uuid": "8f271023-f2ac-412f-a17d-b1c4abc7fe89", 00:04:58.784 "assigned_rate_limits": { 00:04:58.784 "rw_ios_per_sec": 0, 00:04:58.784 "rw_mbytes_per_sec": 0, 00:04:58.784 "r_mbytes_per_sec": 0, 00:04:58.784 "w_mbytes_per_sec": 0 00:04:58.784 }, 00:04:58.784 "claimed": false, 00:04:58.784 "zoned": false, 00:04:58.784 "supported_io_types": { 00:04:58.784 "read": true, 00:04:58.784 "write": true, 00:04:58.784 "unmap": true, 00:04:58.784 "write_zeroes": true, 00:04:58.784 "flush": true, 00:04:58.784 "reset": true, 00:04:58.784 "compare": false, 00:04:58.784 "compare_and_write": false, 00:04:58.784 "abort": true, 00:04:58.784 "nvme_admin": false, 00:04:58.784 "nvme_io": false 00:04:58.784 }, 00:04:58.784 "memory_domains": [ 00:04:58.784 { 00:04:58.784 "dma_device_id": "system", 00:04:58.784 "dma_device_type": 1 00:04:58.784 }, 00:04:58.784 { 00:04:58.784 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.784 "dma_device_type": 2 00:04:58.784 } 00:04:58.784 ], 00:04:58.784 "driver_specific": {} 00:04:58.784 } 00:04:58.784 ]' 00:04:58.784 14:41:58 -- rpc/rpc.sh@17 -- # jq length 00:04:59.042 14:41:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.042 14:41:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.042 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.042 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.042 [2024-04-26 14:41:58.906985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.042 [2024-04-26 14:41:58.907056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.042 [2024-04-26 14:41:58.907097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023780 00:04:59.042 [2024-04-26 14:41:58.907143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.042 [2024-04-26 14:41:58.909464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.042 [2024-04-26 14:41:58.909500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.042 Passthru0 00:04:59.042 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.042 14:41:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.042 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.042 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.042 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.042 14:41:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.042 { 00:04:59.042 "name": "Malloc2", 00:04:59.042 "aliases": [ 00:04:59.042 "8f271023-f2ac-412f-a17d-b1c4abc7fe89" 00:04:59.042 ], 00:04:59.042 "product_name": "Malloc disk", 00:04:59.042 "block_size": 512, 00:04:59.042 "num_blocks": 16384, 00:04:59.042 "uuid": "8f271023-f2ac-412f-a17d-b1c4abc7fe89", 
00:04:59.042 "assigned_rate_limits": { 00:04:59.042 "rw_ios_per_sec": 0, 00:04:59.042 "rw_mbytes_per_sec": 0, 00:04:59.042 "r_mbytes_per_sec": 0, 00:04:59.042 "w_mbytes_per_sec": 0 00:04:59.042 }, 00:04:59.042 "claimed": true, 00:04:59.042 "claim_type": "exclusive_write", 00:04:59.042 "zoned": false, 00:04:59.042 "supported_io_types": { 00:04:59.042 "read": true, 00:04:59.042 "write": true, 00:04:59.042 "unmap": true, 00:04:59.042 "write_zeroes": true, 00:04:59.042 "flush": true, 00:04:59.042 "reset": true, 00:04:59.042 "compare": false, 00:04:59.042 "compare_and_write": false, 00:04:59.042 "abort": true, 00:04:59.042 "nvme_admin": false, 00:04:59.042 "nvme_io": false 00:04:59.042 }, 00:04:59.043 "memory_domains": [ 00:04:59.043 { 00:04:59.043 "dma_device_id": "system", 00:04:59.043 "dma_device_type": 1 00:04:59.043 }, 00:04:59.043 { 00:04:59.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.043 "dma_device_type": 2 00:04:59.043 } 00:04:59.043 ], 00:04:59.043 "driver_specific": {} 00:04:59.043 }, 00:04:59.043 { 00:04:59.043 "name": "Passthru0", 00:04:59.043 "aliases": [ 00:04:59.043 "5cf0e607-d12f-5e75-aaaf-4f13ba99863c" 00:04:59.043 ], 00:04:59.043 "product_name": "passthru", 00:04:59.043 "block_size": 512, 00:04:59.043 "num_blocks": 16384, 00:04:59.043 "uuid": "5cf0e607-d12f-5e75-aaaf-4f13ba99863c", 00:04:59.043 "assigned_rate_limits": { 00:04:59.043 "rw_ios_per_sec": 0, 00:04:59.043 "rw_mbytes_per_sec": 0, 00:04:59.043 "r_mbytes_per_sec": 0, 00:04:59.043 "w_mbytes_per_sec": 0 00:04:59.043 }, 00:04:59.043 "claimed": false, 00:04:59.043 "zoned": false, 00:04:59.043 "supported_io_types": { 00:04:59.043 "read": true, 00:04:59.043 "write": true, 00:04:59.043 "unmap": true, 00:04:59.043 "write_zeroes": true, 00:04:59.043 "flush": true, 00:04:59.043 "reset": true, 00:04:59.043 "compare": false, 00:04:59.043 "compare_and_write": false, 00:04:59.043 "abort": true, 00:04:59.043 "nvme_admin": false, 00:04:59.043 "nvme_io": false 00:04:59.043 }, 00:04:59.043 
"memory_domains": [ 00:04:59.043 { 00:04:59.043 "dma_device_id": "system", 00:04:59.043 "dma_device_type": 1 00:04:59.043 }, 00:04:59.043 { 00:04:59.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.043 "dma_device_type": 2 00:04:59.043 } 00:04:59.043 ], 00:04:59.043 "driver_specific": { 00:04:59.043 "passthru": { 00:04:59.043 "name": "Passthru0", 00:04:59.043 "base_bdev_name": "Malloc2" 00:04:59.043 } 00:04:59.043 } 00:04:59.043 } 00:04:59.043 ]' 00:04:59.043 14:41:58 -- rpc/rpc.sh@21 -- # jq length 00:04:59.043 14:41:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.043 14:41:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.043 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.043 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.043 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.043 14:41:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.043 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.043 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.043 14:41:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.043 14:41:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.043 14:41:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.043 14:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.043 14:41:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.043 14:41:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.043 14:41:59 -- rpc/rpc.sh@26 -- # jq length 00:04:59.043 14:41:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.043 00:04:59.043 real 0m0.247s 00:04:59.043 user 0m0.142s 00:04:59.043 sys 0m0.019s 00:04:59.043 14:41:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.043 14:41:59 -- common/autotest_common.sh@10 -- # set +x 00:04:59.043 ************************************ 00:04:59.043 END TEST rpc_daemon_integrity 00:04:59.043 ************************************ 00:04:59.043 
14:41:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.043 14:41:59 -- rpc/rpc.sh@84 -- # killprocess 99521 00:04:59.043 14:41:59 -- common/autotest_common.sh@936 -- # '[' -z 99521 ']' 00:04:59.043 14:41:59 -- common/autotest_common.sh@940 -- # kill -0 99521 00:04:59.043 14:41:59 -- common/autotest_common.sh@941 -- # uname 00:04:59.043 14:41:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.043 14:41:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99521 00:04:59.043 14:41:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.043 14:41:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.043 14:41:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99521' 00:04:59.043 killing process with pid 99521 00:04:59.043 14:41:59 -- common/autotest_common.sh@955 -- # kill 99521 00:04:59.043 14:41:59 -- common/autotest_common.sh@960 -- # wait 99521 00:05:01.622 00:05:01.622 real 0m4.562s 00:05:01.622 user 0m5.131s 00:05:01.622 sys 0m0.909s 00:05:01.622 14:42:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.622 14:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:01.622 ************************************ 00:05:01.622 END TEST rpc 00:05:01.622 ************************************ 00:05:01.622 14:42:01 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.622 14:42:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.622 14:42:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.622 14:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:01.622 ************************************ 00:05:01.622 START TEST skip_rpc 00:05:01.622 ************************************ 00:05:01.622 14:42:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.622 * Looking for test storage... 
00:05:01.622 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:01.622 14:42:01 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:01.622 14:42:01 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:01.622 14:42:01 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.622 14:42:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.622 14:42:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.622 14:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:01.622 ************************************ 00:05:01.622 START TEST skip_rpc 00:05:01.622 ************************************ 00:05:01.622 14:42:01 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:01.623 14:42:01 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=100367 00:05:01.623 14:42:01 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:01.623 14:42:01 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.623 14:42:01 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:01.623 [2024-04-26 14:42:01.513430] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:01.623 [2024-04-26 14:42:01.513567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100367 ] 00:05:01.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.623 [2024-04-26 14:42:01.637097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.880 [2024-04-26 14:42:01.860822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.141 14:42:06 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.141 14:42:06 -- common/autotest_common.sh@638 -- # local es=0 00:05:07.141 14:42:06 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.141 14:42:06 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:07.141 14:42:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.141 14:42:06 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:07.141 14:42:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.141 14:42:06 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:07.141 14:42:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:07.141 14:42:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.141 14:42:06 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:07.141 14:42:06 -- common/autotest_common.sh@641 -- # es=1 00:05:07.141 14:42:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:07.141 14:42:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:07.141 14:42:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:07.141 14:42:06 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.141 14:42:06 -- rpc/skip_rpc.sh@23 -- # killprocess 100367 00:05:07.141 14:42:06 -- common/autotest_common.sh@936 -- # '[' -z 100367 ']' 00:05:07.141 14:42:06 -- common/autotest_common.sh@940 -- # kill 
-0 100367 00:05:07.141 14:42:06 -- common/autotest_common.sh@941 -- # uname 00:05:07.141 14:42:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.141 14:42:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100367 00:05:07.141 14:42:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.141 14:42:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.141 14:42:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100367' 00:05:07.141 killing process with pid 100367 00:05:07.141 14:42:06 -- common/autotest_common.sh@955 -- # kill 100367 00:05:07.141 14:42:06 -- common/autotest_common.sh@960 -- # wait 100367 00:05:08.518 00:05:08.518 real 0m7.142s 00:05:08.518 user 0m6.658s 00:05:08.518 sys 0m0.478s 00:05:08.518 14:42:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.518 14:42:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.518 ************************************ 00:05:08.518 END TEST skip_rpc 00:05:08.518 ************************************ 00:05:08.518 14:42:08 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.518 14:42:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.518 14:42:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.518 14:42:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.776 ************************************ 00:05:08.776 START TEST skip_rpc_with_json 00:05:08.776 ************************************ 00:05:08.776 14:42:08 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:08.776 14:42:08 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.776 14:42:08 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=101739 00:05:08.777 14:42:08 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.777 14:42:08 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.777 
14:42:08 -- rpc/skip_rpc.sh@31 -- # waitforlisten 101739 00:05:08.777 14:42:08 -- common/autotest_common.sh@817 -- # '[' -z 101739 ']' 00:05:08.777 14:42:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.777 14:42:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.777 14:42:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.777 14:42:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.777 14:42:08 -- common/autotest_common.sh@10 -- # set +x 00:05:08.777 [2024-04-26 14:42:08.781709] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:08.777 [2024-04-26 14:42:08.781846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101739 ] 00:05:08.777 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.035 [2024-04-26 14:42:08.906134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.293 [2024-04-26 14:42:09.124742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.859 14:42:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.859 14:42:09 -- common/autotest_common.sh@850 -- # return 0 00:05:09.859 14:42:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.859 14:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:09.859 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:09.859 [2024-04-26 14:42:09.860668] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.859 request: 00:05:09.860 { 00:05:09.860 "trtype": "tcp", 00:05:09.860 "method": "nvmf_get_transports", 00:05:09.860 
"req_id": 1 00:05:09.860 } 00:05:09.860 Got JSON-RPC error response 00:05:09.860 response: 00:05:09.860 { 00:05:09.860 "code": -19, 00:05:09.860 "message": "No such device" 00:05:09.860 } 00:05:09.860 14:42:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:09.860 14:42:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.860 14:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:09.860 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:09.860 [2024-04-26 14:42:09.868841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.860 14:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:09.860 14:42:09 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.860 14:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:09.860 14:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.119 14:42:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:10.119 14:42:10 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:10.119 { 00:05:10.119 "subsystems": [ 00:05:10.119 { 00:05:10.119 "subsystem": "keyring", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "iobuf", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 "method": "iobuf_set_options", 00:05:10.119 "params": { 00:05:10.119 "small_pool_count": 8192, 00:05:10.119 "large_pool_count": 1024, 00:05:10.119 "small_bufsize": 8192, 00:05:10.119 "large_bufsize": 135168 00:05:10.119 } 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "sock", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 "method": "sock_impl_set_options", 00:05:10.119 "params": { 00:05:10.119 "impl_name": "posix", 00:05:10.119 "recv_buf_size": 2097152, 00:05:10.119 "send_buf_size": 2097152, 00:05:10.119 "enable_recv_pipe": true, 00:05:10.119 "enable_quickack": false, 00:05:10.119 "enable_placement_id": 0, 00:05:10.119 
"enable_zerocopy_send_server": true, 00:05:10.119 "enable_zerocopy_send_client": false, 00:05:10.119 "zerocopy_threshold": 0, 00:05:10.119 "tls_version": 0, 00:05:10.119 "enable_ktls": false 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "sock_impl_set_options", 00:05:10.119 "params": { 00:05:10.119 "impl_name": "ssl", 00:05:10.119 "recv_buf_size": 4096, 00:05:10.119 "send_buf_size": 4096, 00:05:10.119 "enable_recv_pipe": true, 00:05:10.119 "enable_quickack": false, 00:05:10.119 "enable_placement_id": 0, 00:05:10.119 "enable_zerocopy_send_server": true, 00:05:10.119 "enable_zerocopy_send_client": false, 00:05:10.119 "zerocopy_threshold": 0, 00:05:10.119 "tls_version": 0, 00:05:10.119 "enable_ktls": false 00:05:10.119 } 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "vmd", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "accel", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 "method": "accel_set_options", 00:05:10.119 "params": { 00:05:10.119 "small_cache_size": 128, 00:05:10.119 "large_cache_size": 16, 00:05:10.119 "task_count": 2048, 00:05:10.119 "sequence_count": 2048, 00:05:10.119 "buf_count": 2048 00:05:10.119 } 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "bdev", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 "method": "bdev_set_options", 00:05:10.119 "params": { 00:05:10.119 "bdev_io_pool_size": 65535, 00:05:10.119 "bdev_io_cache_size": 256, 00:05:10.119 "bdev_auto_examine": true, 00:05:10.119 "iobuf_small_cache_size": 128, 00:05:10.119 "iobuf_large_cache_size": 16 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "bdev_raid_set_options", 00:05:10.119 "params": { 00:05:10.119 "process_window_size_kb": 1024 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "bdev_iscsi_set_options", 00:05:10.119 "params": { 00:05:10.119 "timeout_sec": 30 00:05:10.119 } 00:05:10.119 }, 
00:05:10.119 { 00:05:10.119 "method": "bdev_nvme_set_options", 00:05:10.119 "params": { 00:05:10.119 "action_on_timeout": "none", 00:05:10.119 "timeout_us": 0, 00:05:10.119 "timeout_admin_us": 0, 00:05:10.119 "keep_alive_timeout_ms": 10000, 00:05:10.119 "arbitration_burst": 0, 00:05:10.119 "low_priority_weight": 0, 00:05:10.119 "medium_priority_weight": 0, 00:05:10.119 "high_priority_weight": 0, 00:05:10.119 "nvme_adminq_poll_period_us": 10000, 00:05:10.119 "nvme_ioq_poll_period_us": 0, 00:05:10.119 "io_queue_requests": 0, 00:05:10.119 "delay_cmd_submit": true, 00:05:10.119 "transport_retry_count": 4, 00:05:10.119 "bdev_retry_count": 3, 00:05:10.119 "transport_ack_timeout": 0, 00:05:10.119 "ctrlr_loss_timeout_sec": 0, 00:05:10.119 "reconnect_delay_sec": 0, 00:05:10.119 "fast_io_fail_timeout_sec": 0, 00:05:10.119 "disable_auto_failback": false, 00:05:10.119 "generate_uuids": false, 00:05:10.119 "transport_tos": 0, 00:05:10.119 "nvme_error_stat": false, 00:05:10.119 "rdma_srq_size": 0, 00:05:10.119 "io_path_stat": false, 00:05:10.119 "allow_accel_sequence": false, 00:05:10.119 "rdma_max_cq_size": 0, 00:05:10.119 "rdma_cm_event_timeout_ms": 0, 00:05:10.119 "dhchap_digests": [ 00:05:10.119 "sha256", 00:05:10.119 "sha384", 00:05:10.119 "sha512" 00:05:10.119 ], 00:05:10.119 "dhchap_dhgroups": [ 00:05:10.119 "null", 00:05:10.119 "ffdhe2048", 00:05:10.119 "ffdhe3072", 00:05:10.119 "ffdhe4096", 00:05:10.119 "ffdhe6144", 00:05:10.119 "ffdhe8192" 00:05:10.119 ] 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "bdev_nvme_set_hotplug", 00:05:10.119 "params": { 00:05:10.119 "period_us": 100000, 00:05:10.119 "enable": false 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "bdev_wait_for_examine" 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "scsi", 00:05:10.119 "config": null 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "scheduler", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 
"method": "framework_set_scheduler", 00:05:10.119 "params": { 00:05:10.119 "name": "static" 00:05:10.119 } 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "vhost_scsi", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "vhost_blk", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "ublk", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "nbd", 00:05:10.119 "config": [] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "nvmf", 00:05:10.119 "config": [ 00:05:10.119 { 00:05:10.119 "method": "nvmf_set_config", 00:05:10.119 "params": { 00:05:10.119 "discovery_filter": "match_any", 00:05:10.119 "admin_cmd_passthru": { 00:05:10.119 "identify_ctrlr": false 00:05:10.119 } 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "nvmf_set_max_subsystems", 00:05:10.119 "params": { 00:05:10.119 "max_subsystems": 1024 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "nvmf_set_crdt", 00:05:10.119 "params": { 00:05:10.119 "crdt1": 0, 00:05:10.119 "crdt2": 0, 00:05:10.119 "crdt3": 0 00:05:10.119 } 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "method": "nvmf_create_transport", 00:05:10.119 "params": { 00:05:10.119 "trtype": "TCP", 00:05:10.119 "max_queue_depth": 128, 00:05:10.119 "max_io_qpairs_per_ctrlr": 127, 00:05:10.119 "in_capsule_data_size": 4096, 00:05:10.119 "max_io_size": 131072, 00:05:10.119 "io_unit_size": 131072, 00:05:10.119 "max_aq_depth": 128, 00:05:10.119 "num_shared_buffers": 511, 00:05:10.119 "buf_cache_size": 4294967295, 00:05:10.119 "dif_insert_or_strip": false, 00:05:10.119 "zcopy": false, 00:05:10.119 "c2h_success": true, 00:05:10.119 "sock_priority": 0, 00:05:10.119 "abort_timeout_sec": 1, 00:05:10.119 "ack_timeout": 0, 00:05:10.119 "data_wr_pool_size": 0 00:05:10.119 } 00:05:10.119 } 00:05:10.119 ] 00:05:10.119 }, 00:05:10.119 { 00:05:10.119 "subsystem": "iscsi", 00:05:10.119 
"config": [ 00:05:10.119 { 00:05:10.119 "method": "iscsi_set_options", 00:05:10.119 "params": { 00:05:10.120 "node_base": "iqn.2016-06.io.spdk", 00:05:10.120 "max_sessions": 128, 00:05:10.120 "max_connections_per_session": 2, 00:05:10.120 "max_queue_depth": 64, 00:05:10.120 "default_time2wait": 2, 00:05:10.120 "default_time2retain": 20, 00:05:10.120 "first_burst_length": 8192, 00:05:10.120 "immediate_data": true, 00:05:10.120 "allow_duplicated_isid": false, 00:05:10.120 "error_recovery_level": 0, 00:05:10.120 "nop_timeout": 60, 00:05:10.120 "nop_in_interval": 30, 00:05:10.120 "disable_chap": false, 00:05:10.120 "require_chap": false, 00:05:10.120 "mutual_chap": false, 00:05:10.120 "chap_group": 0, 00:05:10.120 "max_large_datain_per_connection": 64, 00:05:10.120 "max_r2t_per_connection": 4, 00:05:10.120 "pdu_pool_size": 36864, 00:05:10.120 "immediate_data_pool_size": 16384, 00:05:10.120 "data_out_pool_size": 2048 00:05:10.120 } 00:05:10.120 } 00:05:10.120 ] 00:05:10.120 } 00:05:10.120 ] 00:05:10.120 } 00:05:10.120 14:42:10 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:10.120 14:42:10 -- rpc/skip_rpc.sh@40 -- # killprocess 101739 00:05:10.120 14:42:10 -- common/autotest_common.sh@936 -- # '[' -z 101739 ']' 00:05:10.120 14:42:10 -- common/autotest_common.sh@940 -- # kill -0 101739 00:05:10.120 14:42:10 -- common/autotest_common.sh@941 -- # uname 00:05:10.120 14:42:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.120 14:42:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101739 00:05:10.120 14:42:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.120 14:42:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.120 14:42:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101739' 00:05:10.120 killing process with pid 101739 00:05:10.120 14:42:10 -- common/autotest_common.sh@955 -- # kill 101739 00:05:10.120 14:42:10 -- common/autotest_common.sh@960 -- # 
wait 101739 00:05:12.017 14:42:12 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=102211 00:05:12.018 14:42:12 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:12.018 14:42:12 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.281 14:42:17 -- rpc/skip_rpc.sh@50 -- # killprocess 102211 00:05:17.281 14:42:17 -- common/autotest_common.sh@936 -- # '[' -z 102211 ']' 00:05:17.281 14:42:17 -- common/autotest_common.sh@940 -- # kill -0 102211 00:05:17.281 14:42:17 -- common/autotest_common.sh@941 -- # uname 00:05:17.281 14:42:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.281 14:42:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102211 00:05:17.281 14:42:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.281 14:42:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.281 14:42:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102211' 00:05:17.281 killing process with pid 102211 00:05:17.281 14:42:17 -- common/autotest_common.sh@955 -- # kill 102211 00:05:17.281 14:42:17 -- common/autotest_common.sh@960 -- # wait 102211 00:05:19.180 14:42:19 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:19.180 14:42:19 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:19.180 00:05:19.180 real 0m10.428s 00:05:19.180 user 0m9.934s 00:05:19.180 sys 0m0.972s 00:05:19.180 14:42:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.180 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 ************************************ 00:05:19.180 END TEST skip_rpc_with_json 00:05:19.180 ************************************ 00:05:19.180 14:42:19 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay 
test_skip_rpc_with_delay 00:05:19.180 14:42:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.180 14:42:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.180 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 ************************************ 00:05:19.180 START TEST skip_rpc_with_delay 00:05:19.180 ************************************ 00:05:19.180 14:42:19 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:19.180 14:42:19 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.180 14:42:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:19.180 14:42:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.181 14:42:19 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.181 14:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.181 14:42:19 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.181 14:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.181 14:42:19 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.181 14:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.181 14:42:19 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.181 14:42:19 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.181 14:42:19 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.439 [2024-04-26 
14:42:19.334797] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:19.439 [2024-04-26 14:42:19.334977] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:19.439 14:42:19 -- common/autotest_common.sh@641 -- # es=1 00:05:19.439 14:42:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:19.439 14:42:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:19.439 14:42:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:19.439 00:05:19.439 real 0m0.137s 00:05:19.439 user 0m0.069s 00:05:19.439 sys 0m0.067s 00:05:19.439 14:42:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.439 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 ************************************ 00:05:19.439 END TEST skip_rpc_with_delay 00:05:19.439 ************************************ 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.439 14:42:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.439 14:42:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.439 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 ************************************ 00:05:19.439 START TEST exit_on_failed_rpc_init 00:05:19.439 ************************************ 00:05:19.439 14:42:19 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=103128 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.439 14:42:19 -- rpc/skip_rpc.sh@63 -- # waitforlisten 103128 00:05:19.439 14:42:19 -- common/autotest_common.sh@817 -- # '[' -z 103128 ']' 00:05:19.439 14:42:19 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.439 14:42:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.439 14:42:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.439 14:42:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.439 14:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:19.697 [2024-04-26 14:42:19.600686] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:19.697 [2024-04-26 14:42:19.600829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103128 ] 00:05:19.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.698 [2024-04-26 14:42:19.722582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.956 [2024-04-26 14:42:19.930428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.892 14:42:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.892 14:42:20 -- common/autotest_common.sh@850 -- # return 0 00:05:20.892 14:42:20 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.892 14:42:20 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.892 14:42:20 -- common/autotest_common.sh@638 -- # local es=0 00:05:20.892 14:42:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.892 14:42:20 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:42:20 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:05:20.892 14:42:20 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:42:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:20.892 14:42:20 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:42:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:20.892 14:42:20 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:42:20 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:20.892 14:42:20 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.892 [2024-04-26 14:42:20.787363] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:20.892 [2024-04-26 14:42:20.787522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103272 ] 00:05:20.892 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.892 [2024-04-26 14:42:20.909615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.151 [2024-04-26 14:42:21.128856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.151 [2024-04-26 14:42:21.129011] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:21.151 [2024-04-26 14:42:21.129041] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:21.151 [2024-04-26 14:42:21.129059] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.727 14:42:21 -- common/autotest_common.sh@641 -- # es=234 00:05:21.727 14:42:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:21.727 14:42:21 -- common/autotest_common.sh@650 -- # es=106 00:05:21.727 14:42:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:21.727 14:42:21 -- common/autotest_common.sh@658 -- # es=1 00:05:21.727 14:42:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:21.727 14:42:21 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:21.727 14:42:21 -- rpc/skip_rpc.sh@70 -- # killprocess 103128 00:05:21.727 14:42:21 -- common/autotest_common.sh@936 -- # '[' -z 103128 ']' 00:05:21.727 14:42:21 -- common/autotest_common.sh@940 -- # kill -0 103128 00:05:21.727 14:42:21 -- common/autotest_common.sh@941 -- # uname 00:05:21.727 14:42:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.727 14:42:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103128 00:05:21.727 14:42:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.727 14:42:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.727 14:42:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103128' 00:05:21.727 killing process with pid 103128 00:05:21.728 14:42:21 -- common/autotest_common.sh@955 -- # kill 103128 00:05:21.728 14:42:21 -- common/autotest_common.sh@960 -- # wait 103128 00:05:23.628 00:05:23.628 real 0m4.048s 00:05:23.628 user 0m4.643s 00:05:23.628 sys 0m0.696s 00:05:23.628 14:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.628 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.628 ************************************ 00:05:23.628 END TEST exit_on_failed_rpc_init 00:05:23.628 
************************************ 00:05:23.628 14:42:23 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:23.628 00:05:23.629 real 0m22.294s 00:05:23.629 user 0m21.493s 00:05:23.629 sys 0m2.531s 00:05:23.629 14:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.629 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.629 ************************************ 00:05:23.629 END TEST skip_rpc 00:05:23.629 ************************************ 00:05:23.629 14:42:23 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.629 14:42:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.629 14:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.629 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.629 ************************************ 00:05:23.629 START TEST rpc_client 00:05:23.629 ************************************ 00:05:23.629 14:42:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.888 * Looking for test storage... 
00:05:23.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:23.888 14:42:23 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:23.888 OK 00:05:23.888 14:42:23 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.888 00:05:23.888 real 0m0.097s 00:05:23.888 user 0m0.048s 00:05:23.888 sys 0m0.054s 00:05:23.888 14:42:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.888 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.888 ************************************ 00:05:23.888 END TEST rpc_client 00:05:23.888 ************************************ 00:05:23.888 14:42:23 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.888 14:42:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.888 14:42:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.888 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.888 ************************************ 00:05:23.888 START TEST json_config 00:05:23.888 ************************************ 00:05:23.888 14:42:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.888 14:42:23 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.888 14:42:23 -- nvmf/common.sh@7 -- # uname -s 00:05:23.888 14:42:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.888 14:42:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.888 14:42:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.888 14:42:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.888 14:42:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.888 14:42:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.888 14:42:23 -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.888 14:42:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.888 14:42:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.888 14:42:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.888 14:42:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:23.888 14:42:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:23.888 14:42:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.888 14:42:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.888 14:42:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.888 14:42:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.888 14:42:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:23.888 14:42:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.888 14:42:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.888 14:42:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.888 14:42:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.888 14:42:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.888 14:42:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.888 14:42:23 -- paths/export.sh@5 -- # export PATH 00:05:23.888 14:42:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.888 14:42:23 -- nvmf/common.sh@47 -- # : 0 00:05:23.888 14:42:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.888 14:42:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.888 14:42:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.888 14:42:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.888 14:42:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.888 14:42:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.888 14:42:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.888 14:42:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.888 
14:42:23 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:23.888 14:42:23 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.888 14:42:23 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.888 14:42:23 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.888 14:42:23 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.888 14:42:23 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.888 14:42:23 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.888 14:42:23 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.888 14:42:23 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.888 14:42:23 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.888 14:42:23 -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.888 14:42:23 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:23.888 14:42:23 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.888 14:42:23 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.888 14:42:23 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.888 14:42:23 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:23.888 INFO: JSON configuration test init 00:05:23.888 14:42:23 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:23.888 14:42:23 -- json_config/json_config.sh@262 -- # timing_enter 
json_config_test_init 00:05:23.888 14:42:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.888 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.888 14:42:23 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:23.888 14:42:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.888 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.148 14:42:23 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.148 14:42:23 -- json_config/common.sh@9 -- # local app=target 00:05:24.148 14:42:23 -- json_config/common.sh@10 -- # shift 00:05:24.148 14:42:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.148 14:42:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.148 14:42:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.148 14:42:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.148 14:42:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.148 14:42:23 -- json_config/common.sh@22 -- # app_pid["$app"]=103798 00:05:24.148 14:42:23 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.148 14:42:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.148 Waiting for target to run... 00:05:24.148 14:42:23 -- json_config/common.sh@25 -- # waitforlisten 103798 /var/tmp/spdk_tgt.sock 00:05:24.148 14:42:23 -- common/autotest_common.sh@817 -- # '[' -z 103798 ']' 00:05:24.148 14:42:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.148 14:42:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.148 14:42:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:24.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.148 14:42:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.148 14:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.148 [2024-04-26 14:42:24.054913] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:24.148 [2024-04-26 14:42:24.055060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103798 ] 00:05:24.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.716 [2024-04-26 14:42:24.638139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.973 [2024-04-26 14:42:24.836462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.973 14:42:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.973 14:42:24 -- common/autotest_common.sh@850 -- # return 0 00:05:24.973 14:42:24 -- json_config/common.sh@26 -- # echo '' 00:05:24.973 00:05:24.973 14:42:24 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:24.973 14:42:24 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:24.974 14:42:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:24.974 14:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.974 14:42:24 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:24.974 14:42:24 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:24.974 14:42:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:24.974 14:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.974 14:42:24 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.974 14:42:24 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
00:05:24.974 14:42:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.164 14:42:28 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:29.164 14:42:28 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:29.164 14:42:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.164 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.164 14:42:28 -- json_config/json_config.sh@45 -- # local ret=0 00:05:29.164 14:42:28 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.164 14:42:28 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:29.164 14:42:28 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:29.164 14:42:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:29.164 14:42:28 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:29.164 14:42:28 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:29.164 14:42:28 -- json_config/json_config.sh@48 -- # local get_types 00:05:29.164 14:42:28 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:29.164 14:42:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.164 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.164 14:42:28 -- json_config/json_config.sh@55 -- # return 0 00:05:29.164 14:42:28 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@290 -- 
# [[ 1 -eq 1 ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:29.164 14:42:28 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:29.164 14:42:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:29.164 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.164 14:42:28 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.164 14:42:28 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:29.164 14:42:28 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:29.164 14:42:28 -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:29.164 14:42:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:29.164 14:42:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.164 14:42:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:29.164 14:42:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:29.164 14:42:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:29.164 14:42:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.164 14:42:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:29.164 14:42:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.164 14:42:28 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:05:29.164 14:42:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:29.164 14:42:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:29.164 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 14:42:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:31.068 14:42:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:31.068 14:42:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:31.068 14:42:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:31.068 14:42:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:31.068 14:42:30 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:05:31.068 14:42:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:31.068 14:42:30 -- nvmf/common.sh@295 -- # net_devs=() 00:05:31.068 14:42:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:31.068 14:42:30 -- nvmf/common.sh@296 -- # e810=() 00:05:31.068 14:42:30 -- nvmf/common.sh@296 -- # local -ga e810 00:05:31.068 14:42:30 -- nvmf/common.sh@297 -- # x722=() 00:05:31.068 14:42:30 -- nvmf/common.sh@297 -- # local -ga x722 00:05:31.068 14:42:30 -- nvmf/common.sh@298 -- # mlx=() 00:05:31.068 14:42:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:31.068 14:42:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:31.068 14:42:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:05:31.068 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:05:31.068 14:42:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:31.068 14:42:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:05:31.068 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:05:31.068 14:42:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:31.068 14:42:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.068 14:42:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.068 14:42:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:05:31.068 Found net devices under 0000:09:00.0: mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.068 14:42:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.068 14:42:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:05:31.068 Found net devices under 0000:09:00.1: mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.068 14:42:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:31.068 14:42:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@409 -- # rdma_device_init 00:05:31.068 14:42:30 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:31.068 14:42:30 -- nvmf/common.sh@58 -- # uname 00:05:31.068 14:42:30 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:31.068 14:42:30 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:31.068 14:42:30 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:31.068 14:42:30 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:31.068 14:42:30 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:31.068 14:42:30 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:31.068 14:42:30 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:31.068 14:42:30 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:31.068 14:42:30 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:31.068 14:42:30 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:31.068 14:42:30 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:31.068 14:42:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:31.068 14:42:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:31.068 14:42:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:31.068 14:42:30 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:31.068 14:42:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@105 -- # continue 2 00:05:31.068 14:42:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.068 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@105 -- # continue 2 00:05:31.068 14:42:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:31.068 14:42:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:31.068 14:42:30 -- nvmf/common.sh@74 -- # ip= 00:05:31.068 14:42:30 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:31.068 14:42:30 -- nvmf/common.sh@77 -- # ip link set mlx_0_0 up 00:05:31.068 14:42:30 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:31.068 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:31.068 link/ether 
b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:05:31.068 altname enp9s0f0np0 00:05:31.068 inet 192.168.100.8/24 scope global mlx_0_0 00:05:31.068 valid_lft forever preferred_lft forever 00:05:31.068 14:42:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:31.068 14:42:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:31.068 14:42:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:31.068 14:42:30 -- nvmf/common.sh@74 -- # ip= 00:05:31.068 14:42:30 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:31.068 14:42:30 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:05:31.068 14:42:30 -- nvmf/common.sh@77 -- # ip link set mlx_0_1 up 00:05:31.068 14:42:30 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:31.068 14:42:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:31.068 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:31.068 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:05:31.068 altname enp9s0f1np1 00:05:31.068 inet 192.168.100.9/24 scope global mlx_0_1 00:05:31.068 valid_lft forever preferred_lft forever 00:05:31.069 14:42:30 -- nvmf/common.sh@411 -- # return 0 00:05:31.069 14:42:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:31.069 14:42:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:31.069 14:42:30 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:31.069 14:42:30 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:31.069 14:42:30 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:31.069 14:42:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:31.069 14:42:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:31.069 14:42:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:31.069 14:42:30 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:31.069 14:42:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:31.069 14:42:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:31.069 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.069 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:31.069 14:42:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:31.069 14:42:30 -- nvmf/common.sh@105 -- # continue 2 00:05:31.069 14:42:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:31.069 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.069 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:31.069 14:42:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:31.069 14:42:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:31.069 14:42:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:31.069 14:42:30 -- nvmf/common.sh@105 -- # continue 2 00:05:31.069 14:42:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:31.069 14:42:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:31.069 14:42:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:31.069 14:42:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:31.069 14:42:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:31.069 14:42:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:31.069 14:42:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:31.069 14:42:30 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:31.069 
192.168.100.9' 00:05:31.069 14:42:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:31.069 192.168.100.9' 00:05:31.069 14:42:30 -- nvmf/common.sh@446 -- # head -n 1 00:05:31.069 14:42:30 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:31.069 14:42:30 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:31.069 192.168.100.9' 00:05:31.069 14:42:30 -- nvmf/common.sh@447 -- # tail -n +2 00:05:31.069 14:42:30 -- nvmf/common.sh@447 -- # head -n 1 00:05:31.069 14:42:30 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:31.069 14:42:30 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:31.069 14:42:30 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:31.069 14:42:30 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:31.069 14:42:30 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:31.069 14:42:30 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:31.069 14:42:30 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:31.069 14:42:30 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.069 14:42:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.327 MallocForNvmf0 00:05:31.327 14:42:31 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.327 14:42:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.586 MallocForNvmf1 00:05:31.586 14:42:31 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:31.586 14:42:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 
00:05:31.586 [2024-04-26 14:42:31.663402] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:31.845 [2024-04-26 14:42:31.716923] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f654976a940) succeed. 00:05:31.845 [2024-04-26 14:42:31.729571] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f6549726940) succeed. 00:05:31.845 14:42:31 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.845 14:42:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.103 14:42:32 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.103 14:42:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.362 14:42:32 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.362 14:42:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.621 14:42:32 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:32.621 14:42:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:32.880 [2024-04-26 14:42:32.723899] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening 
on 192.168.100.8 port 4420 *** 00:05:32.880 14:42:32 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:32.880 14:42:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.880 14:42:32 -- common/autotest_common.sh@10 -- # set +x 00:05:32.880 14:42:32 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:32.880 14:42:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.880 14:42:32 -- common/autotest_common.sh@10 -- # set +x 00:05:32.880 14:42:32 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:32.880 14:42:32 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:32.880 14:42:32 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.138 MallocBdevForConfigChangeCheck 00:05:33.138 14:42:33 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:33.138 14:42:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:33.138 14:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.138 14:42:33 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:33.138 14:42:33 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.396 14:42:33 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:33.396 INFO: shutting down applications... 
00:05:33.396 14:42:33 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:33.396 14:42:33 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:33.396 14:42:33 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:33.396 14:42:33 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.926 Calling clear_iscsi_subsystem 00:05:35.926 Calling clear_nvmf_subsystem 00:05:35.926 Calling clear_nbd_subsystem 00:05:35.926 Calling clear_ublk_subsystem 00:05:35.926 Calling clear_vhost_blk_subsystem 00:05:35.926 Calling clear_vhost_scsi_subsystem 00:05:35.926 Calling clear_bdev_subsystem 00:05:35.926 14:42:35 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.926 14:42:35 -- json_config/json_config.sh@343 -- # count=100 00:05:35.926 14:42:35 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.926 14:42:35 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.926 14:42:35 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.926 14:42:35 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.493 14:42:36 -- json_config/json_config.sh@345 -- # break 00:05:36.493 14:42:36 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.493 14:42:36 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.493 14:42:36 -- json_config/common.sh@31 -- # local app=target 00:05:36.493 14:42:36 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.493 14:42:36 -- json_config/common.sh@35 -- # [[ -n 103798 ]] 00:05:36.493 14:42:36 
-- json_config/common.sh@38 -- # kill -SIGINT 103798 00:05:36.493 14:42:36 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.493 14:42:36 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.493 14:42:36 -- json_config/common.sh@41 -- # kill -0 103798 00:05:36.493 14:42:36 -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.751 [2024-04-26 14:42:36.797641] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:37.009 14:42:36 -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.009 14:42:36 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.009 14:42:36 -- json_config/common.sh@41 -- # kill -0 103798 00:05:37.009 14:42:36 -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.582 14:42:37 -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.582 14:42:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.582 14:42:37 -- json_config/common.sh@41 -- # kill -0 103798 00:05:37.582 14:42:37 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.582 14:42:37 -- json_config/common.sh@43 -- # break 00:05:37.582 14:42:37 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.582 14:42:37 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.582 SPDK target shutdown done 00:05:37.582 14:42:37 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:37.582 INFO: relaunching applications... 
00:05:37.582 14:42:37 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.582 14:42:37 -- json_config/common.sh@9 -- # local app=target 00:05:37.582 14:42:37 -- json_config/common.sh@10 -- # shift 00:05:37.582 14:42:37 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.582 14:42:37 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.582 14:42:37 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.582 14:42:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.582 14:42:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.582 14:42:37 -- json_config/common.sh@22 -- # app_pid["$app"]=106861 00:05:37.582 14:42:37 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.582 14:42:37 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.582 Waiting for target to run... 00:05:37.582 14:42:37 -- json_config/common.sh@25 -- # waitforlisten 106861 /var/tmp/spdk_tgt.sock 00:05:37.582 14:42:37 -- common/autotest_common.sh@817 -- # '[' -z 106861 ']' 00:05:37.582 14:42:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.582 14:42:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.582 14:42:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.582 14:42:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.582 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 [2024-04-26 14:42:37.463841] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:37.582 [2024-04-26 14:42:37.463980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106861 ] 00:05:37.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.153 [2024-04-26 14:42:38.048793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.411 [2024-04-26 14:42:38.241169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.601 [2024-04-26 14:42:41.810804] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7fcca6548940) succeed. 00:05:42.601 [2024-04-26 14:42:41.823278] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7fcca6504940) succeed. 00:05:42.601 [2024-04-26 14:42:41.890946] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:42.601 14:42:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.601 14:42:42 -- common/autotest_common.sh@850 -- # return 0 00:05:42.601 14:42:42 -- json_config/common.sh@26 -- # echo '' 00:05:42.601 00:05:42.601 14:42:42 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:42.601 14:42:42 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:42.601 INFO: Checking if target configuration is the same... 
00:05:42.602 14:42:42 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.602 14:42:42 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:42.602 14:42:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.602 + '[' 2 -ne 2 ']' 00:05:42.602 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.602 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:42.602 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:42.602 +++ basename /dev/fd/62 00:05:42.602 ++ mktemp /tmp/62.XXX 00:05:42.602 + tmp_file_1=/tmp/62.umF 00:05:42.602 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.602 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.602 + tmp_file_2=/tmp/spdk_tgt_config.json.L4w 00:05:42.602 + ret=0 00:05:42.602 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.860 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.119 + diff -u /tmp/62.umF /tmp/spdk_tgt_config.json.L4w 00:05:43.119 + echo 'INFO: JSON config files are the same' 00:05:43.119 INFO: JSON config files are the same 00:05:43.119 + rm /tmp/62.umF /tmp/spdk_tgt_config.json.L4w 00:05:43.119 + exit 0 00:05:43.119 14:42:42 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:43.119 14:42:42 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:43.119 INFO: changing configuration and checking if this can be detected... 
00:05:43.119 14:42:42 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.119 14:42:42 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.119 14:42:43 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.119 14:42:43 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:43.119 14:42:43 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.375 + '[' 2 -ne 2 ']' 00:05:43.375 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.375 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:43.375 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:43.375 +++ basename /dev/fd/62 00:05:43.375 ++ mktemp /tmp/62.XXX 00:05:43.375 + tmp_file_1=/tmp/62.Lyy 00:05:43.375 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.375 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.375 + tmp_file_2=/tmp/spdk_tgt_config.json.ZxG 00:05:43.375 + ret=0 00:05:43.375 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.633 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.633 + diff -u /tmp/62.Lyy /tmp/spdk_tgt_config.json.ZxG 00:05:43.633 + ret=1 00:05:43.633 + echo '=== Start of file: /tmp/62.Lyy ===' 00:05:43.633 + cat /tmp/62.Lyy 00:05:43.633 + echo '=== End of file: /tmp/62.Lyy ===' 00:05:43.633 + echo '' 00:05:43.633 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZxG ===' 00:05:43.633 + cat /tmp/spdk_tgt_config.json.ZxG 00:05:43.633 + 
echo '=== End of file: /tmp/spdk_tgt_config.json.ZxG ===' 00:05:43.633 + echo '' 00:05:43.633 + rm /tmp/62.Lyy /tmp/spdk_tgt_config.json.ZxG 00:05:43.633 + exit 1 00:05:43.633 14:42:43 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:43.633 INFO: configuration change detected. 00:05:43.633 14:42:43 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:43.633 14:42:43 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:43.633 14:42:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.633 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.633 14:42:43 -- json_config/json_config.sh@307 -- # local ret=0 00:05:43.633 14:42:43 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:43.633 14:42:43 -- json_config/json_config.sh@317 -- # [[ -n 106861 ]] 00:05:43.633 14:42:43 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:43.633 14:42:43 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:43.633 14:42:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:43.633 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.633 14:42:43 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:43.633 14:42:43 -- json_config/json_config.sh@193 -- # uname -s 00:05:43.633 14:42:43 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:43.633 14:42:43 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:43.633 14:42:43 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:43.633 14:42:43 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:43.633 14:42:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:43.633 14:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.633 14:42:43 -- json_config/json_config.sh@323 -- # killprocess 106861 00:05:43.633 14:42:43 -- common/autotest_common.sh@936 -- # '[' -z 106861 ']' 00:05:43.633 
14:42:43 -- common/autotest_common.sh@940 -- # kill -0 106861 00:05:43.633 14:42:43 -- common/autotest_common.sh@941 -- # uname 00:05:43.633 14:42:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:43.633 14:42:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106861 00:05:43.633 14:42:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:43.633 14:42:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:43.633 14:42:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106861' 00:05:43.633 killing process with pid 106861 00:05:43.633 14:42:43 -- common/autotest_common.sh@955 -- # kill 106861 00:05:43.633 14:42:43 -- common/autotest_common.sh@960 -- # wait 106861 00:05:44.199 [2024-04-26 14:42:44.107221] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:46.724 14:42:46 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.724 14:42:46 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:46.724 14:42:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:46.724 14:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.984 14:42:46 -- json_config/json_config.sh@328 -- # return 0 00:05:46.984 14:42:46 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:46.984 INFO: Success 00:05:46.984 14:42:46 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:46.984 14:42:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:46.984 14:42:46 -- nvmf/common.sh@117 -- # sync 00:05:46.984 14:42:46 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:46.984 14:42:46 -- 
nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:05:46.984 00:05:46.984 real 0m22.913s 00:05:46.984 user 0m25.221s 00:05:46.984 sys 0m4.019s 00:05:46.984 14:42:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.984 14:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.984 ************************************ 00:05:46.984 END TEST json_config 00:05:46.984 ************************************ 00:05:46.984 14:42:46 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.984 14:42:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.984 14:42:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.984 14:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:46.984 ************************************ 00:05:46.984 START TEST json_config_extra_key 00:05:46.984 ************************************ 00:05:46.984 14:42:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.984 14:42:46 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.984 14:42:46 -- nvmf/common.sh@7 -- # uname -s 00:05:46.984 14:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.984 14:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.984 14:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.984 14:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.984 14:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.984 14:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.984 14:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.984 14:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.984 14:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.984 14:42:46 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:05:46.984 14:42:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:05:46.984 14:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:05:46.984 14:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.984 14:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.984 14:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.984 14:42:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.984 14:42:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:46.984 14:42:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.984 14:42:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.984 14:42:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.984 14:42:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.984 14:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.984 14:42:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.984 14:42:46 -- paths/export.sh@5 -- # export PATH 00:05:46.984 14:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.984 14:42:46 -- nvmf/common.sh@47 -- # : 0 00:05:46.984 14:42:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.984 14:42:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.984 14:42:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.984 14:42:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.984 14:42:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.984 14:42:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.984 INFO: launching applications... 00:05:46.984 14:42:47 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.985 14:42:47 -- json_config/common.sh@9 -- # local app=target 00:05:46.985 14:42:47 -- json_config/common.sh@10 -- # shift 00:05:46.985 14:42:47 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.985 14:42:47 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.985 14:42:47 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.985 14:42:47 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.985 14:42:47 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.985 14:42:47 -- json_config/common.sh@22 -- # app_pid["$app"]=108164 00:05:46.985 14:42:47 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.985 Waiting for target to run... 
00:05:46.985 14:42:47 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.985 14:42:47 -- json_config/common.sh@25 -- # waitforlisten 108164 /var/tmp/spdk_tgt.sock 00:05:46.985 14:42:47 -- common/autotest_common.sh@817 -- # '[' -z 108164 ']' 00:05:46.985 14:42:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.985 14:42:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.985 14:42:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.985 14:42:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.985 14:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:47.243 [2024-04-26 14:42:47.093762] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:47.243 [2024-04-26 14:42:47.093900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108164 ] 00:05:47.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.502 [2024-04-26 14:42:47.494844] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.760 [2024-04-26 14:42:47.672305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.327 14:42:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.327 14:42:48 -- common/autotest_common.sh@850 -- # return 0 00:05:48.327 14:42:48 -- json_config/common.sh@26 -- # echo '' 00:05:48.327 00:05:48.327 14:42:48 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:48.327 INFO: shutting down applications... 00:05:48.327 14:42:48 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.327 14:42:48 -- json_config/common.sh@31 -- # local app=target 00:05:48.327 14:42:48 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.327 14:42:48 -- json_config/common.sh@35 -- # [[ -n 108164 ]] 00:05:48.327 14:42:48 -- json_config/common.sh@38 -- # kill -SIGINT 108164 00:05:48.327 14:42:48 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.327 14:42:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.327 14:42:48 -- json_config/common.sh@41 -- # kill -0 108164 00:05:48.327 14:42:48 -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.893 14:42:48 -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.893 14:42:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.893 14:42:48 -- json_config/common.sh@41 -- # kill -0 108164 00:05:48.893 14:42:48 -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.460 14:42:49 -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.460 14:42:49 -- json_config/common.sh@40 -- # (( i < 30 )) 
00:05:49.460 14:42:49 -- json_config/common.sh@41 -- # kill -0 108164 00:05:49.460 14:42:49 -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.719 14:42:49 -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.719 14:42:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.719 14:42:49 -- json_config/common.sh@41 -- # kill -0 108164 00:05:49.719 14:42:49 -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.287 14:42:50 -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.287 14:42:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.287 14:42:50 -- json_config/common.sh@41 -- # kill -0 108164 00:05:50.287 14:42:50 -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.856 14:42:50 -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.856 14:42:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.856 14:42:50 -- json_config/common.sh@41 -- # kill -0 108164 00:05:50.856 14:42:50 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.856 14:42:50 -- json_config/common.sh@43 -- # break 00:05:50.856 14:42:50 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.856 14:42:50 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.856 SPDK target shutdown done 00:05:50.856 14:42:50 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.856 Success 00:05:50.856 00:05:50.856 real 0m3.803s 00:05:50.856 user 0m3.517s 00:05:50.856 sys 0m0.601s 00:05:50.856 14:42:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.856 14:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.856 ************************************ 00:05:50.856 END TEST json_config_extra_key 00:05:50.856 ************************************ 00:05:50.856 14:42:50 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.856 14:42:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.856 14:42:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:50.856 14:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:50.856 ************************************ 00:05:50.856 START TEST alias_rpc 00:05:50.856 ************************************ 00:05:50.856 14:42:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.115 * Looking for test storage... 00:05:51.115 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:51.115 14:42:50 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.115 14:42:50 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=108645 00:05:51.115 14:42:50 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.115 14:42:50 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 108645 00:05:51.115 14:42:50 -- common/autotest_common.sh@817 -- # '[' -z 108645 ']' 00:05:51.115 14:42:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.115 14:42:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.115 14:42:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.115 14:42:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.115 14:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:51.115 [2024-04-26 14:42:51.030536] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:51.115 [2024-04-26 14:42:51.030680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108645 ] 00:05:51.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.115 [2024-04-26 14:42:51.154492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.373 [2024-04-26 14:42:51.362070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.308 14:42:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.308 14:42:52 -- common/autotest_common.sh@850 -- # return 0 00:05:52.308 14:42:52 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:52.308 14:42:52 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 108645 00:05:52.308 14:42:52 -- common/autotest_common.sh@936 -- # '[' -z 108645 ']' 00:05:52.308 14:42:52 -- common/autotest_common.sh@940 -- # kill -0 108645 00:05:52.308 14:42:52 -- common/autotest_common.sh@941 -- # uname 00:05:52.308 14:42:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.308 14:42:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108645 00:05:52.308 14:42:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.308 14:42:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.308 14:42:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108645' 00:05:52.308 killing process with pid 108645 00:05:52.308 14:42:52 -- common/autotest_common.sh@955 -- # kill 108645 00:05:52.308 14:42:52 -- common/autotest_common.sh@960 -- # wait 108645 00:05:54.835 00:05:54.835 real 0m3.533s 00:05:54.835 user 0m3.645s 00:05:54.835 sys 0m0.591s 00:05:54.835 14:42:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.835 14:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 
************************************ 00:05:54.836 END TEST alias_rpc 00:05:54.836 ************************************ 00:05:54.836 14:42:54 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:54.836 14:42:54 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.836 14:42:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.836 14:42:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.836 14:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 ************************************ 00:05:54.836 START TEST spdkcli_tcp 00:05:54.836 ************************************ 00:05:54.836 14:42:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.836 * Looking for test storage... 00:05:54.836 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:54.836 14:42:54 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:54.836 14:42:54 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.836 14:42:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:54.836 14:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=109109 00:05:54.836 14:42:54 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.836 
14:42:54 -- spdkcli/tcp.sh@27 -- # waitforlisten 109109 00:05:54.836 14:42:54 -- common/autotest_common.sh@817 -- # '[' -z 109109 ']' 00:05:54.836 14:42:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.836 14:42:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.836 14:42:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.836 14:42:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.836 14:42:54 -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 [2024-04-26 14:42:54.698588] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:54.836 [2024-04-26 14:42:54.698726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109109 ] 00:05:54.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.836 [2024-04-26 14:42:54.821939] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.093 [2024-04-26 14:42:55.037680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.093 [2024-04-26 14:42:55.037683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.027 14:42:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.027 14:42:55 -- common/autotest_common.sh@850 -- # return 0 00:05:56.027 14:42:55 -- spdkcli/tcp.sh@31 -- # socat_pid=109248 00:05:56.027 14:42:55 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.027 14:42:55 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.027 [ 00:05:56.027 "bdev_malloc_delete", 
00:05:56.027 "bdev_malloc_create", 00:05:56.027 "bdev_null_resize", 00:05:56.027 "bdev_null_delete", 00:05:56.027 "bdev_null_create", 00:05:56.027 "bdev_nvme_cuse_unregister", 00:05:56.027 "bdev_nvme_cuse_register", 00:05:56.027 "bdev_opal_new_user", 00:05:56.027 "bdev_opal_set_lock_state", 00:05:56.027 "bdev_opal_delete", 00:05:56.027 "bdev_opal_get_info", 00:05:56.027 "bdev_opal_create", 00:05:56.027 "bdev_nvme_opal_revert", 00:05:56.027 "bdev_nvme_opal_init", 00:05:56.027 "bdev_nvme_send_cmd", 00:05:56.027 "bdev_nvme_get_path_iostat", 00:05:56.027 "bdev_nvme_get_mdns_discovery_info", 00:05:56.027 "bdev_nvme_stop_mdns_discovery", 00:05:56.027 "bdev_nvme_start_mdns_discovery", 00:05:56.027 "bdev_nvme_set_multipath_policy", 00:05:56.027 "bdev_nvme_set_preferred_path", 00:05:56.027 "bdev_nvme_get_io_paths", 00:05:56.027 "bdev_nvme_remove_error_injection", 00:05:56.027 "bdev_nvme_add_error_injection", 00:05:56.027 "bdev_nvme_get_discovery_info", 00:05:56.027 "bdev_nvme_stop_discovery", 00:05:56.027 "bdev_nvme_start_discovery", 00:05:56.027 "bdev_nvme_get_controller_health_info", 00:05:56.027 "bdev_nvme_disable_controller", 00:05:56.027 "bdev_nvme_enable_controller", 00:05:56.027 "bdev_nvme_reset_controller", 00:05:56.027 "bdev_nvme_get_transport_statistics", 00:05:56.027 "bdev_nvme_apply_firmware", 00:05:56.027 "bdev_nvme_detach_controller", 00:05:56.027 "bdev_nvme_get_controllers", 00:05:56.027 "bdev_nvme_attach_controller", 00:05:56.027 "bdev_nvme_set_hotplug", 00:05:56.027 "bdev_nvme_set_options", 00:05:56.027 "bdev_passthru_delete", 00:05:56.027 "bdev_passthru_create", 00:05:56.027 "bdev_lvol_grow_lvstore", 00:05:56.027 "bdev_lvol_get_lvols", 00:05:56.027 "bdev_lvol_get_lvstores", 00:05:56.027 "bdev_lvol_delete", 00:05:56.027 "bdev_lvol_set_read_only", 00:05:56.027 "bdev_lvol_resize", 00:05:56.027 "bdev_lvol_decouple_parent", 00:05:56.027 "bdev_lvol_inflate", 00:05:56.027 "bdev_lvol_rename", 00:05:56.027 "bdev_lvol_clone_bdev", 00:05:56.027 "bdev_lvol_clone", 
00:05:56.027 "bdev_lvol_snapshot", 00:05:56.027 "bdev_lvol_create", 00:05:56.027 "bdev_lvol_delete_lvstore", 00:05:56.027 "bdev_lvol_rename_lvstore", 00:05:56.027 "bdev_lvol_create_lvstore", 00:05:56.027 "bdev_raid_set_options", 00:05:56.027 "bdev_raid_remove_base_bdev", 00:05:56.027 "bdev_raid_add_base_bdev", 00:05:56.027 "bdev_raid_delete", 00:05:56.027 "bdev_raid_create", 00:05:56.027 "bdev_raid_get_bdevs", 00:05:56.027 "bdev_error_inject_error", 00:05:56.027 "bdev_error_delete", 00:05:56.027 "bdev_error_create", 00:05:56.027 "bdev_split_delete", 00:05:56.027 "bdev_split_create", 00:05:56.027 "bdev_delay_delete", 00:05:56.027 "bdev_delay_create", 00:05:56.027 "bdev_delay_update_latency", 00:05:56.027 "bdev_zone_block_delete", 00:05:56.027 "bdev_zone_block_create", 00:05:56.027 "blobfs_create", 00:05:56.027 "blobfs_detect", 00:05:56.027 "blobfs_set_cache_size", 00:05:56.027 "bdev_aio_delete", 00:05:56.027 "bdev_aio_rescan", 00:05:56.027 "bdev_aio_create", 00:05:56.027 "bdev_ftl_set_property", 00:05:56.027 "bdev_ftl_get_properties", 00:05:56.027 "bdev_ftl_get_stats", 00:05:56.027 "bdev_ftl_unmap", 00:05:56.027 "bdev_ftl_unload", 00:05:56.027 "bdev_ftl_delete", 00:05:56.027 "bdev_ftl_load", 00:05:56.027 "bdev_ftl_create", 00:05:56.027 "bdev_virtio_attach_controller", 00:05:56.027 "bdev_virtio_scsi_get_devices", 00:05:56.027 "bdev_virtio_detach_controller", 00:05:56.027 "bdev_virtio_blk_set_hotplug", 00:05:56.027 "bdev_iscsi_delete", 00:05:56.027 "bdev_iscsi_create", 00:05:56.027 "bdev_iscsi_set_options", 00:05:56.027 "accel_error_inject_error", 00:05:56.027 "ioat_scan_accel_module", 00:05:56.027 "dsa_scan_accel_module", 00:05:56.027 "iaa_scan_accel_module", 00:05:56.027 "keyring_file_remove_key", 00:05:56.027 "keyring_file_add_key", 00:05:56.027 "iscsi_get_histogram", 00:05:56.027 "iscsi_enable_histogram", 00:05:56.027 "iscsi_set_options", 00:05:56.027 "iscsi_get_auth_groups", 00:05:56.027 "iscsi_auth_group_remove_secret", 00:05:56.027 
"iscsi_auth_group_add_secret", 00:05:56.027 "iscsi_delete_auth_group", 00:05:56.027 "iscsi_create_auth_group", 00:05:56.027 "iscsi_set_discovery_auth", 00:05:56.027 "iscsi_get_options", 00:05:56.027 "iscsi_target_node_request_logout", 00:05:56.027 "iscsi_target_node_set_redirect", 00:05:56.027 "iscsi_target_node_set_auth", 00:05:56.027 "iscsi_target_node_add_lun", 00:05:56.027 "iscsi_get_stats", 00:05:56.027 "iscsi_get_connections", 00:05:56.027 "iscsi_portal_group_set_auth", 00:05:56.027 "iscsi_start_portal_group", 00:05:56.027 "iscsi_delete_portal_group", 00:05:56.028 "iscsi_create_portal_group", 00:05:56.028 "iscsi_get_portal_groups", 00:05:56.028 "iscsi_delete_target_node", 00:05:56.028 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.028 "iscsi_target_node_add_pg_ig_maps", 00:05:56.028 "iscsi_create_target_node", 00:05:56.028 "iscsi_get_target_nodes", 00:05:56.028 "iscsi_delete_initiator_group", 00:05:56.028 "iscsi_initiator_group_remove_initiators", 00:05:56.028 "iscsi_initiator_group_add_initiators", 00:05:56.028 "iscsi_create_initiator_group", 00:05:56.028 "iscsi_get_initiator_groups", 00:05:56.028 "nvmf_set_crdt", 00:05:56.028 "nvmf_set_config", 00:05:56.028 "nvmf_set_max_subsystems", 00:05:56.028 "nvmf_subsystem_get_listeners", 00:05:56.028 "nvmf_subsystem_get_qpairs", 00:05:56.028 "nvmf_subsystem_get_controllers", 00:05:56.028 "nvmf_get_stats", 00:05:56.028 "nvmf_get_transports", 00:05:56.028 "nvmf_create_transport", 00:05:56.028 "nvmf_get_targets", 00:05:56.028 "nvmf_delete_target", 00:05:56.028 "nvmf_create_target", 00:05:56.028 "nvmf_subsystem_allow_any_host", 00:05:56.028 "nvmf_subsystem_remove_host", 00:05:56.028 "nvmf_subsystem_add_host", 00:05:56.028 "nvmf_ns_remove_host", 00:05:56.028 "nvmf_ns_add_host", 00:05:56.028 "nvmf_subsystem_remove_ns", 00:05:56.028 "nvmf_subsystem_add_ns", 00:05:56.028 "nvmf_subsystem_listener_set_ana_state", 00:05:56.028 "nvmf_discovery_get_referrals", 00:05:56.028 "nvmf_discovery_remove_referral", 00:05:56.028 
"nvmf_discovery_add_referral", 00:05:56.028 "nvmf_subsystem_remove_listener", 00:05:56.028 "nvmf_subsystem_add_listener", 00:05:56.028 "nvmf_delete_subsystem", 00:05:56.028 "nvmf_create_subsystem", 00:05:56.028 "nvmf_get_subsystems", 00:05:56.028 "env_dpdk_get_mem_stats", 00:05:56.028 "nbd_get_disks", 00:05:56.028 "nbd_stop_disk", 00:05:56.028 "nbd_start_disk", 00:05:56.028 "ublk_recover_disk", 00:05:56.028 "ublk_get_disks", 00:05:56.028 "ublk_stop_disk", 00:05:56.028 "ublk_start_disk", 00:05:56.028 "ublk_destroy_target", 00:05:56.028 "ublk_create_target", 00:05:56.028 "virtio_blk_create_transport", 00:05:56.028 "virtio_blk_get_transports", 00:05:56.028 "vhost_controller_set_coalescing", 00:05:56.028 "vhost_get_controllers", 00:05:56.028 "vhost_delete_controller", 00:05:56.028 "vhost_create_blk_controller", 00:05:56.028 "vhost_scsi_controller_remove_target", 00:05:56.028 "vhost_scsi_controller_add_target", 00:05:56.028 "vhost_start_scsi_controller", 00:05:56.028 "vhost_create_scsi_controller", 00:05:56.028 "thread_set_cpumask", 00:05:56.028 "framework_get_scheduler", 00:05:56.028 "framework_set_scheduler", 00:05:56.028 "framework_get_reactors", 00:05:56.028 "thread_get_io_channels", 00:05:56.028 "thread_get_pollers", 00:05:56.028 "thread_get_stats", 00:05:56.028 "framework_monitor_context_switch", 00:05:56.028 "spdk_kill_instance", 00:05:56.028 "log_enable_timestamps", 00:05:56.028 "log_get_flags", 00:05:56.028 "log_clear_flag", 00:05:56.028 "log_set_flag", 00:05:56.028 "log_get_level", 00:05:56.028 "log_set_level", 00:05:56.028 "log_get_print_level", 00:05:56.028 "log_set_print_level", 00:05:56.028 "framework_enable_cpumask_locks", 00:05:56.028 "framework_disable_cpumask_locks", 00:05:56.028 "framework_wait_init", 00:05:56.028 "framework_start_init", 00:05:56.028 "scsi_get_devices", 00:05:56.028 "bdev_get_histogram", 00:05:56.028 "bdev_enable_histogram", 00:05:56.028 "bdev_set_qos_limit", 00:05:56.028 "bdev_set_qd_sampling_period", 00:05:56.028 "bdev_get_bdevs", 
00:05:56.028 "bdev_reset_iostat", 00:05:56.028 "bdev_get_iostat", 00:05:56.028 "bdev_examine", 00:05:56.028 "bdev_wait_for_examine", 00:05:56.028 "bdev_set_options", 00:05:56.028 "notify_get_notifications", 00:05:56.028 "notify_get_types", 00:05:56.028 "accel_get_stats", 00:05:56.028 "accel_set_options", 00:05:56.028 "accel_set_driver", 00:05:56.028 "accel_crypto_key_destroy", 00:05:56.028 "accel_crypto_keys_get", 00:05:56.028 "accel_crypto_key_create", 00:05:56.028 "accel_assign_opc", 00:05:56.028 "accel_get_module_info", 00:05:56.028 "accel_get_opc_assignments", 00:05:56.028 "vmd_rescan", 00:05:56.028 "vmd_remove_device", 00:05:56.028 "vmd_enable", 00:05:56.028 "sock_get_default_impl", 00:05:56.028 "sock_set_default_impl", 00:05:56.028 "sock_impl_set_options", 00:05:56.028 "sock_impl_get_options", 00:05:56.028 "iobuf_get_stats", 00:05:56.028 "iobuf_set_options", 00:05:56.028 "framework_get_pci_devices", 00:05:56.028 "framework_get_config", 00:05:56.028 "framework_get_subsystems", 00:05:56.028 "trace_get_info", 00:05:56.028 "trace_get_tpoint_group_mask", 00:05:56.028 "trace_disable_tpoint_group", 00:05:56.028 "trace_enable_tpoint_group", 00:05:56.028 "trace_clear_tpoint_mask", 00:05:56.028 "trace_set_tpoint_mask", 00:05:56.028 "keyring_get_keys", 00:05:56.028 "spdk_get_version", 00:05:56.028 "rpc_get_methods" 00:05:56.028 ] 00:05:56.028 14:42:56 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.028 14:42:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:56.028 14:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:56.028 14:42:56 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.028 14:42:56 -- spdkcli/tcp.sh@38 -- # killprocess 109109 00:05:56.028 14:42:56 -- common/autotest_common.sh@936 -- # '[' -z 109109 ']' 00:05:56.028 14:42:56 -- common/autotest_common.sh@940 -- # kill -0 109109 00:05:56.028 14:42:56 -- common/autotest_common.sh@941 -- # uname 00:05:56.028 14:42:56 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:05:56.028 14:42:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 109109 00:05:56.028 14:42:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.028 14:42:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.028 14:42:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 109109' 00:05:56.028 killing process with pid 109109 00:05:56.028 14:42:56 -- common/autotest_common.sh@955 -- # kill 109109 00:05:56.028 14:42:56 -- common/autotest_common.sh@960 -- # wait 109109 00:05:58.560 00:05:58.560 real 0m3.614s 00:05:58.560 user 0m6.496s 00:05:58.560 sys 0m0.612s 00:05:58.560 14:42:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.560 14:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:58.560 ************************************ 00:05:58.560 END TEST spdkcli_tcp 00:05:58.560 ************************************ 00:05:58.560 14:42:58 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.560 14:42:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.560 14:42:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.561 14:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 ************************************ 00:05:58.561 START TEST dpdk_mem_utility 00:05:58.561 ************************************ 00:05:58.561 14:42:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.561 * Looking for test storage... 
00:05:58.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:58.561 14:42:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.561 14:42:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=109710 00:05:58.561 14:42:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.561 14:42:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 109710 00:05:58.561 14:42:58 -- common/autotest_common.sh@817 -- # '[' -z 109710 ']' 00:05:58.561 14:42:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.561 14:42:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.561 14:42:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.561 14:42:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.561 14:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:58.561 [2024-04-26 14:42:58.419156] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:58.561 [2024-04-26 14:42:58.419293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109710 ] 00:05:58.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.561 [2024-04-26 14:42:58.537100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.820 [2024-04-26 14:42:58.743362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.756 14:42:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.756 14:42:59 -- common/autotest_common.sh@850 -- # return 0 00:05:59.756 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.756 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.756 14:42:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:59.756 14:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:59.756 { 00:05:59.756 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.756 } 00:05:59.756 14:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:59.756 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.756 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:59.756 1 heaps totaling size 820.000000 MiB 00:05:59.756 size: 820.000000 MiB heap id: 0 00:05:59.756 end heaps---------- 00:05:59.756 8 mempools totaling size 598.116089 MiB 00:05:59.756 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.756 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.756 size: 84.521057 MiB name: bdev_io_109710 00:05:59.756 size: 51.011292 MiB name: evtpool_109710 00:05:59.756 size: 50.003479 MiB name: msgpool_109710 00:05:59.756 size: 21.763794 MiB name: PDU_Pool 00:05:59.756 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.756 
size: 0.026123 MiB name: Session_Pool 00:05:59.756 end mempools------- 00:05:59.756 6 memzones totaling size 4.142822 MiB 00:05:59.756 size: 1.000366 MiB name: RG_ring_0_109710 00:05:59.756 size: 1.000366 MiB name: RG_ring_1_109710 00:05:59.756 size: 1.000366 MiB name: RG_ring_4_109710 00:05:59.756 size: 1.000366 MiB name: RG_ring_5_109710 00:05:59.756 size: 0.125366 MiB name: RG_ring_2_109710 00:05:59.756 size: 0.015991 MiB name: RG_ring_3_109710 00:05:59.756 end memzones------- 00:05:59.756 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.756 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:59.756 list of free elements. size: 18.514832 MiB 00:05:59.756 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:59.756 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:59.757 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:59.757 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:59.757 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:59.757 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:59.757 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:59.757 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:59.757 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:59.757 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:59.757 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:59.757 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:59.757 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:59.757 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:59.757 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:59.757 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:59.757 element at address: 0x200028400000 
with size: 0.411072 MiB 00:05:59.757 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:59.757 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:59.757 list of standard malloc elements. size: 199.220764 MiB 00:05:59.757 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:59.757 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:59.757 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:59.757 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:59.757 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:59.757 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:59.757 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:59.757 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:59.757 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:59.757 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:59.757 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:59.757 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:59.757 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:59.757 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:59.757 element at address: 
0x20000b1ff880 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:59.757 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:59.757 list of memzone associated elements. 
size: 602.264404 MiB 00:05:59.757 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:59.757 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.757 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:59.757 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.757 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:59.757 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_109710_0 00:05:59.757 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:59.757 associated memzone info: size: 48.002930 MiB name: MP_evtpool_109710_0 00:05:59.757 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:59.757 associated memzone info: size: 48.002930 MiB name: MP_msgpool_109710_0 00:05:59.757 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:59.757 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.757 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:59.757 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.757 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:59.757 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_109710 00:05:59.757 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:59.757 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_109710 00:05:59.757 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:59.757 associated memzone info: size: 1.007996 MiB name: MP_evtpool_109710 00:05:59.757 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:59.757 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.757 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:59.757 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.757 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:59.757 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.757 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:59.757 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.757 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:59.757 associated memzone info: size: 1.000366 MiB name: RG_ring_0_109710 00:05:59.757 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:59.757 associated memzone info: size: 1.000366 MiB name: RG_ring_1_109710 00:05:59.757 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:59.757 associated memzone info: size: 1.000366 MiB name: RG_ring_4_109710 00:05:59.757 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:59.757 associated memzone info: size: 1.000366 MiB name: RG_ring_5_109710 00:05:59.757 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:59.757 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_109710 00:05:59.757 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:59.757 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.757 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:59.757 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.757 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:59.757 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.757 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:59.757 associated memzone info: size: 0.125366 MiB name: RG_ring_2_109710 00:05:59.757 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:59.757 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.757 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:59.757 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.757 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:59.757 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_109710 00:05:59.757 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:59.757 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.757 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:59.757 associated memzone info: size: 0.000183 MiB name: MP_msgpool_109710 00:05:59.757 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:59.757 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_109710 00:05:59.757 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:59.757 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.757 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.757 14:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 109710 00:05:59.757 14:42:59 -- common/autotest_common.sh@936 -- # '[' -z 109710 ']' 00:05:59.757 14:42:59 -- common/autotest_common.sh@940 -- # kill -0 109710 00:05:59.757 14:42:59 -- common/autotest_common.sh@941 -- # uname 00:05:59.757 14:42:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.757 14:42:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 109710 00:05:59.757 14:42:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.757 14:42:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.757 14:42:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 109710' 00:05:59.757 killing process with pid 109710 00:05:59.757 14:42:59 -- common/autotest_common.sh@955 -- # kill 109710 00:05:59.757 14:42:59 -- common/autotest_common.sh@960 -- # wait 109710 00:06:01.660 00:06:01.660 real 0m3.443s 00:06:01.660 user 0m3.451s 00:06:01.660 sys 0m0.561s 00:06:01.660 14:43:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.660 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.660 
************************************ 00:06:01.660 END TEST dpdk_mem_utility 00:06:01.660 ************************************ 00:06:01.967 14:43:01 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:01.967 14:43:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.967 14:43:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.967 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.967 ************************************ 00:06:01.967 START TEST event 00:06:01.967 ************************************ 00:06:01.967 14:43:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:01.967 * Looking for test storage... 00:06:01.967 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:01.967 14:43:01 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:01.967 14:43:01 -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.967 14:43:01 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:01.968 14:43:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:01.968 14:43:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.968 14:43:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.968 ************************************ 00:06:01.968 START TEST event_perf 00:06:01.968 ************************************ 00:06:01.968 14:43:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.239 Running I/O for 1 seconds...[2024-04-26 14:43:02.030772] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:02.239 [2024-04-26 14:43:02.030881] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110182 ] 00:06:02.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.239 [2024-04-26 14:43:02.151311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.522 [2024-04-26 14:43:02.364623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.522 [2024-04-26 14:43:02.364671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.522 [2024-04-26 14:43:02.364807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.522 [2024-04-26 14:43:02.364809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.967 Running I/O for 1 seconds... 00:06:03.967 lcore 0: 223104 00:06:03.967 lcore 1: 223104 00:06:03.967 lcore 2: 223104 00:06:03.967 lcore 3: 223105 00:06:03.967 done. 
00:06:03.967 00:06:03.967 real 0m1.716s 00:06:03.967 user 0m4.558s 00:06:03.967 sys 0m0.143s 00:06:03.967 14:43:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.967 14:43:03 -- common/autotest_common.sh@10 -- # set +x 00:06:03.967 ************************************ 00:06:03.967 END TEST event_perf 00:06:03.967 ************************************ 00:06:03.967 14:43:03 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.967 14:43:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:03.967 14:43:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.967 14:43:03 -- common/autotest_common.sh@10 -- # set +x 00:06:03.967 ************************************ 00:06:03.967 START TEST event_reactor 00:06:03.967 ************************************ 00:06:03.967 14:43:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:03.967 [2024-04-26 14:43:03.868157] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:03.967 [2024-04-26 14:43:03.868287] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110380 ] 00:06:03.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.968 [2024-04-26 14:43:03.986991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.242 [2024-04-26 14:43:04.196368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.697 test_start 00:06:05.697 oneshot 00:06:05.697 tick 100 00:06:05.697 tick 100 00:06:05.697 tick 250 00:06:05.697 tick 100 00:06:05.697 tick 100 00:06:05.697 tick 100 00:06:05.697 tick 250 00:06:05.697 tick 500 00:06:05.697 tick 100 00:06:05.697 tick 100 00:06:05.697 tick 250 00:06:05.697 tick 100 00:06:05.697 tick 100 00:06:05.697 test_end 00:06:05.697 00:06:05.697 real 0m1.706s 00:06:05.697 user 0m1.558s 00:06:05.697 sys 0m0.139s 00:06:05.697 14:43:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.697 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:05.697 ************************************ 00:06:05.697 END TEST event_reactor 00:06:05.697 ************************************ 00:06:05.697 14:43:05 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.697 14:43:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:05.697 14:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.697 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:05.697 ************************************ 00:06:05.697 START TEST event_reactor_perf 00:06:05.697 ************************************ 00:06:05.697 14:43:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.697 [2024-04-26 14:43:05.699447] Starting 
SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:05.697 [2024-04-26 14:43:05.699554] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110663 ] 00:06:05.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.971 [2024-04-26 14:43:05.816776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.971 [2024-04-26 14:43:06.022124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.413 test_start 00:06:07.413 test_end 00:06:07.413 Performance: 328844 events per second 00:06:07.413 00:06:07.413 real 0m1.703s 00:06:07.413 user 0m1.547s 00:06:07.413 sys 0m0.147s 00:06:07.413 14:43:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.413 14:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:07.413 ************************************ 00:06:07.413 END TEST event_reactor_perf 00:06:07.413 ************************************ 00:06:07.413 14:43:07 -- event/event.sh@49 -- # uname -s 00:06:07.413 14:43:07 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.413 14:43:07 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.413 14:43:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.413 14:43:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.413 14:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:07.672 ************************************ 00:06:07.672 START TEST event_scheduler 00:06:07.672 ************************************ 00:06:07.672 14:43:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.672 * Looking for test storage... 
00:06:07.672 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:07.672 14:43:07 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.672 14:43:07 -- scheduler/scheduler.sh@35 -- # scheduler_pid=110991 00:06:07.672 14:43:07 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.672 14:43:07 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.672 14:43:07 -- scheduler/scheduler.sh@37 -- # waitforlisten 110991 00:06:07.673 14:43:07 -- common/autotest_common.sh@817 -- # '[' -z 110991 ']' 00:06:07.673 14:43:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.673 14:43:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.673 14:43:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.673 14:43:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.673 14:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:07.673 [2024-04-26 14:43:07.629519] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:07.673 [2024-04-26 14:43:07.629681] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110991 ] 00:06:07.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.673 [2024-04-26 14:43:07.746533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.931 [2024-04-26 14:43:07.958858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.931 [2024-04-26 14:43:07.958973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.931 [2024-04-26 14:43:07.959013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.931 [2024-04-26 14:43:07.959042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.497 14:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.497 14:43:08 -- common/autotest_common.sh@850 -- # return 0 00:06:08.497 14:43:08 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.497 14:43:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.497 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.497 POWER: Env isn't set yet! 00:06:08.497 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.497 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:08.497 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:08.497 POWER: Cannot get available frequencies of lcore 0 00:06:08.497 POWER: Attempting to initialise PSTAT power management... 
00:06:08.497 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:08.497 POWER: Initialized successfully for lcore 0 power management 00:06:08.497 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:08.497 POWER: Initialized successfully for lcore 1 power management 00:06:08.497 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:08.497 POWER: Initialized successfully for lcore 2 power management 00:06:08.497 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:08.498 POWER: Initialized successfully for lcore 3 power management 00:06:08.498 14:43:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:08.498 14:43:08 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.498 14:43:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.498 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.065 [2024-04-26 14:43:08.919004] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:09.065 14:43:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.065 14:43:08 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:09.065 14:43:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.065 14:43:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.065 14:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.065 ************************************ 00:06:09.065 START TEST scheduler_create_thread 00:06:09.065 ************************************ 00:06:09.065 14:43:09 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:09.065 14:43:09 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:09.065 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.065 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.065 2 00:06:09.065 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.065 14:43:09 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:09.065 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.065 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.065 3 00:06:09.065 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.065 14:43:09 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:09.065 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 4 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 
14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 5 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 6 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 7 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 8 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 9 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 10 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:06:09.066 14:43:09 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.066 14:43:09 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.066 14:43:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 14:43:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.066 00:06:09.066 real 0m0.111s 00:06:09.066 user 0m0.011s 00:06:09.066 sys 0m0.003s 00:06:09.066 14:43:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.066 14:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:09.066 ************************************ 00:06:09.066 END TEST scheduler_create_thread 00:06:09.066 ************************************ 00:06:09.324 14:43:09 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.324 14:43:09 -- 
scheduler/scheduler.sh@46 -- # killprocess 110991 00:06:09.324 14:43:09 -- common/autotest_common.sh@936 -- # '[' -z 110991 ']' 00:06:09.324 14:43:09 -- common/autotest_common.sh@940 -- # kill -0 110991 00:06:09.324 14:43:09 -- common/autotest_common.sh@941 -- # uname 00:06:09.324 14:43:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.324 14:43:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110991 00:06:09.324 14:43:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:09.324 14:43:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:09.324 14:43:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110991' 00:06:09.324 killing process with pid 110991 00:06:09.324 14:43:09 -- common/autotest_common.sh@955 -- # kill 110991 00:06:09.324 14:43:09 -- common/autotest_common.sh@960 -- # wait 110991 00:06:09.582 [2024-04-26 14:43:09.627493] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:10.517 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:10.517 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:10.517 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:10.517 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:10.517 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:10.517 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:10.517 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:10.517 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:10.775 00:06:10.776 real 0m3.276s 00:06:10.776 user 0m6.049s 00:06:10.776 sys 0m0.531s 00:06:10.776 14:43:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.776 14:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:10.776 ************************************ 00:06:10.776 END TEST event_scheduler 00:06:10.776 ************************************ 00:06:10.776 14:43:10 -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.776 14:43:10 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.776 14:43:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.776 14:43:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.776 14:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.034 ************************************ 00:06:11.034 START TEST app_repeat 00:06:11.034 ************************************ 00:06:11.034 14:43:10 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:11.034 14:43:10 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.034 14:43:10 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.034 
14:43:10 -- event/event.sh@13 -- # local nbd_list 00:06:11.034 14:43:10 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.034 14:43:10 -- event/event.sh@14 -- # local bdev_list 00:06:11.034 14:43:10 -- event/event.sh@15 -- # local repeat_times=4 00:06:11.034 14:43:10 -- event/event.sh@17 -- # modprobe nbd 00:06:11.034 14:43:10 -- event/event.sh@19 -- # repeat_pid=111452 00:06:11.034 14:43:10 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.034 14:43:10 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.034 14:43:10 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 111452' 00:06:11.034 Process app_repeat pid: 111452 00:06:11.034 14:43:10 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.034 14:43:10 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.034 spdk_app_start Round 0 00:06:11.034 14:43:10 -- event/event.sh@25 -- # waitforlisten 111452 /var/tmp/spdk-nbd.sock 00:06:11.034 14:43:10 -- common/autotest_common.sh@817 -- # '[' -z 111452 ']' 00:06:11.034 14:43:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.034 14:43:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.034 14:43:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.034 14:43:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.034 14:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.034 [2024-04-26 14:43:10.963285] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:11.034 [2024-04-26 14:43:10.963400] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111452 ] 00:06:11.034 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.034 [2024-04-26 14:43:11.082878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.291 [2024-04-26 14:43:11.332981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.291 [2024-04-26 14:43:11.332984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.858 14:43:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.858 14:43:11 -- common/autotest_common.sh@850 -- # return 0 00:06:11.858 14:43:11 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.423 Malloc0 00:06:12.423 14:43:12 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.681 Malloc1 00:06:12.681 14:43:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:12.681 14:43:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@12 -- # local i 00:06:12.681 14:43:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.682 14:43:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.682 14:43:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.939 /dev/nbd0 00:06:12.939 14:43:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.939 14:43:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.939 14:43:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:12.939 14:43:12 -- common/autotest_common.sh@855 -- # local i 00:06:12.939 14:43:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:12.939 14:43:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:12.939 14:43:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:12.939 14:43:12 -- common/autotest_common.sh@859 -- # break 00:06:12.939 14:43:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:12.939 14:43:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:12.939 14:43:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.939 1+0 records in 00:06:12.939 1+0 records out 00:06:12.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018813 s, 21.8 MB/s 00:06:12.939 14:43:12 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.939 14:43:12 -- common/autotest_common.sh@872 -- # size=4096 00:06:12.939 14:43:12 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:12.939 
14:43:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:12.939 14:43:12 -- common/autotest_common.sh@875 -- # return 0 00:06:12.939 14:43:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.939 14:43:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.939 14:43:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.198 /dev/nbd1 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.198 14:43:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:13.198 14:43:13 -- common/autotest_common.sh@855 -- # local i 00:06:13.198 14:43:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:13.198 14:43:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:13.198 14:43:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:13.198 14:43:13 -- common/autotest_common.sh@859 -- # break 00:06:13.198 14:43:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:13.198 14:43:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:13.198 14:43:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.198 1+0 records in 00:06:13.198 1+0 records out 00:06:13.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024139 s, 17.0 MB/s 00:06:13.198 14:43:13 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.198 14:43:13 -- common/autotest_common.sh@872 -- # size=4096 00:06:13.198 14:43:13 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.198 14:43:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:13.198 14:43:13 -- common/autotest_common.sh@875 -- # return 0 00:06:13.198 
14:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.198 14:43:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.455 { 00:06:13.455 "nbd_device": "/dev/nbd0", 00:06:13.455 "bdev_name": "Malloc0" 00:06:13.455 }, 00:06:13.455 { 00:06:13.455 "nbd_device": "/dev/nbd1", 00:06:13.455 "bdev_name": "Malloc1" 00:06:13.455 } 00:06:13.455 ]' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.455 { 00:06:13.455 "nbd_device": "/dev/nbd0", 00:06:13.455 "bdev_name": "Malloc0" 00:06:13.455 }, 00:06:13.455 { 00:06:13.455 "nbd_device": "/dev/nbd1", 00:06:13.455 "bdev_name": "Malloc1" 00:06:13.455 } 00:06:13.455 ]' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.455 /dev/nbd1' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.455 /dev/nbd1' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@72 
-- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.455 256+0 records in 00:06:13.455 256+0 records out 00:06:13.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500165 s, 210 MB/s 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.455 256+0 records in 00:06:13.455 256+0 records out 00:06:13.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239369 s, 43.8 MB/s 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.455 14:43:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.713 256+0 records in 00:06:13.713 256+0 records out 00:06:13.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0334574 s, 31.3 MB/s 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@51 -- # local i 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@41 -- # break 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.713 14:43:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@41 -- # break 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.971 14:43:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@65 -- # true 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.537 14:43:14 -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.537 14:43:14 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.795 14:43:14 -- event/event.sh@35 -- # sleep 3 00:06:16.176 [2024-04-26 14:43:16.161878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.435 [2024-04-26 14:43:16.407550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.435 [2024-04-26 
14:43:16.407560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.693 [2024-04-26 14:43:16.620713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.693 [2024-04-26 14:43:16.620803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.066 14:43:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:18.066 14:43:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.066 spdk_app_start Round 1 00:06:18.066 14:43:17 -- event/event.sh@25 -- # waitforlisten 111452 /var/tmp/spdk-nbd.sock 00:06:18.066 14:43:17 -- common/autotest_common.sh@817 -- # '[' -z 111452 ']' 00:06:18.066 14:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.066 14:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.066 14:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
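The `waitfornbd` calls that recur through each round poll `/proc/partitions` up to 20 times for the device name, then read a single 4 KiB block with `dd` to confirm the device actually answers I/O. The sketch below reproduces that polling shape; the temp files standing in for `/proc/partitions` and `/dev/nbd0` are assumptions so the sketch runs without a real nbd device, and `iflag=direct` is dropped because plain files are used.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling pattern from the trace. PARTITIONS and DEV
# are stand-ins (assumptions) for /proc/partitions and /dev/nbd0.
PARTITIONS=$(mktemp)
DEV=$(mktemp)
dd if=/dev/zero of="$DEV" bs=4096 count=1 2>/dev/null

waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$PARTITIONS" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1        # loop exhausted: device never appeared
    # one 4 KiB read, mirroring the trace's dd probe of the fresh device
    dd if="$DEV" of=/dev/null bs=4096 count=1 2>/dev/null
}

( sleep 0.3; echo "259 0 1024 nbd0" >> "$PARTITIONS" ) &   # device appears late
waitfornbd_sketch nbd0 && echo "nbd0 is up"
```

The bounded retry loop with `break` is why the trace shows `(( i <= 20 ))` followed by a `break` line each time a device comes up on the first probe.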
00:06:18.066 14:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.066 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:18.066 14:43:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.066 14:43:18 -- common/autotest_common.sh@850 -- # return 0 00:06:18.066 14:43:18 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.324 Malloc0 00:06:18.324 14:43:18 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.889 Malloc1 00:06:18.889 14:43:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.889 /dev/nbd0 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.889 14:43:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.889 14:43:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:18.889 14:43:18 -- common/autotest_common.sh@855 -- # local i 00:06:18.889 14:43:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:18.889 14:43:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:18.889 14:43:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:18.889 14:43:18 -- common/autotest_common.sh@859 -- # break 00:06:19.146 14:43:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:19.146 14:43:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:19.146 14:43:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.146 1+0 records in 00:06:19.146 1+0 records out 00:06:19.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215799 s, 19.0 MB/s 00:06:19.147 14:43:18 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.147 14:43:18 -- common/autotest_common.sh@872 -- # size=4096 00:06:19.147 14:43:18 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.147 14:43:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:19.147 14:43:18 -- common/autotest_common.sh@875 -- # return 0 00:06:19.147 14:43:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.147 14:43:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.147 14:43:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.147 /dev/nbd1 
00:06:19.147 14:43:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.147 14:43:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.404 14:43:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:19.404 14:43:19 -- common/autotest_common.sh@855 -- # local i 00:06:19.404 14:43:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:19.404 14:43:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:19.404 14:43:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:19.404 14:43:19 -- common/autotest_common.sh@859 -- # break 00:06:19.404 14:43:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:19.404 14:43:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:19.404 14:43:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.404 1+0 records in 00:06:19.404 1+0 records out 00:06:19.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153847 s, 26.6 MB/s 00:06:19.404 14:43:19 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.404 14:43:19 -- common/autotest_common.sh@872 -- # size=4096 00:06:19.404 14:43:19 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.404 14:43:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:19.404 14:43:19 -- common/autotest_common.sh@875 -- # return 0 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:06:19.404 14:43:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.404 { 00:06:19.404 "nbd_device": "/dev/nbd0", 00:06:19.404 "bdev_name": "Malloc0" 00:06:19.404 }, 00:06:19.404 { 00:06:19.404 "nbd_device": "/dev/nbd1", 00:06:19.404 "bdev_name": "Malloc1" 00:06:19.404 } 00:06:19.404 ]' 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.404 { 00:06:19.404 "nbd_device": "/dev/nbd0", 00:06:19.404 "bdev_name": "Malloc0" 00:06:19.404 }, 00:06:19.404 { 00:06:19.404 "nbd_device": "/dev/nbd1", 00:06:19.404 "bdev_name": "Malloc1" 00:06:19.404 } 00:06:19.404 ]' 00:06:19.404 14:43:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.662 /dev/nbd1' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.662 /dev/nbd1' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.662 256+0 records in 00:06:19.662 256+0 records out 00:06:19.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049304 
s, 213 MB/s 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.662 256+0 records in 00:06:19.662 256+0 records out 00:06:19.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280955 s, 37.3 MB/s 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.662 256+0 records in 00:06:19.662 256+0 records out 00:06:19.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312812 s, 33.5 MB/s 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.662 14:43:19 -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.662 14:43:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@41 -- # break 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.919 14:43:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@41 -- # break 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.177 14:43:20 
-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.177 14:43:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@65 -- # true 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.434 14:43:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.434 14:43:20 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.000 14:43:20 -- event/event.sh@35 -- # sleep 3 00:06:22.374 [2024-04-26 14:43:22.202814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.374 [2024-04-26 14:43:22.446257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.374 [2024-04-26 14:43:22.446257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.632 [2024-04-26 14:43:22.649757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.632 [2024-04-26 14:43:22.649835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
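Each round's `nbd_dd_data_verify` phase in the trace fills a temp file with 256 × 4 KiB of `/dev/urandom`, copies it onto every nbd device with `dd`, then byte-compares the first 1 MiB back with `cmp -b -n 1M`. Below is a self-contained sketch of that write/verify cycle; the temp files in `nbd_list` are assumptions standing in for `/dev/nbd0` and `/dev/nbd1`, so `oflag=direct` is dropped.

```shell
#!/usr/bin/env bash
set -e
# Sketch of the write/verify cycle from the trace (nbd_common.sh). Regular
# temp files stand in for the real /dev/nbd0 and /dev/nbd1 devices.
tmp_file=$(mktemp)                          # the trace's "nbdrandtest" file
nbd_list=("$(mktemp)" "$(mktemp)")          # stand-ins for /dev/nbd0 /dev/nbd1

# Write phase: 1 MiB of random data, copied onto every device.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1 MiB of each device with the source,
# exactly as the trace's 'cmp -b -n 1M' lines do.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"
echo "verify ok"
```

Using random data rather than zeros is what makes the `cmp` meaningful: a device that silently drops writes would still pass a zero-fill comparison.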
00:06:24.007 14:43:23 -- event/event.sh@23 -- # for i in {0..2} 00:06:24.007 14:43:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.007 spdk_app_start Round 2 00:06:24.007 14:43:23 -- event/event.sh@25 -- # waitforlisten 111452 /var/tmp/spdk-nbd.sock 00:06:24.007 14:43:23 -- common/autotest_common.sh@817 -- # '[' -z 111452 ']' 00:06:24.007 14:43:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.007 14:43:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.007 14:43:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.007 14:43:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.007 14:43:23 -- common/autotest_common.sh@10 -- # set +x 00:06:24.265 14:43:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.265 14:43:24 -- common/autotest_common.sh@850 -- # return 0 00:06:24.265 14:43:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.524 Malloc0 00:06:24.524 14:43:24 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.782 Malloc1 00:06:24.782 14:43:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.782 14:43:24 -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.782 14:43:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.040 /dev/nbd0 00:06:25.040 14:43:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.040 14:43:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.040 14:43:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:25.040 14:43:25 -- common/autotest_common.sh@855 -- # local i 00:06:25.040 14:43:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:25.040 14:43:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:25.040 14:43:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:25.040 14:43:25 -- common/autotest_common.sh@859 -- # break 00:06:25.040 14:43:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:25.040 14:43:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:25.040 14:43:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.040 1+0 records in 00:06:25.040 1+0 records out 00:06:25.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147268 s, 27.8 MB/s 00:06:25.040 14:43:25 -- common/autotest_common.sh@872 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:25.040 14:43:25 -- common/autotest_common.sh@872 -- # size=4096 00:06:25.040 14:43:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:25.040 14:43:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:25.040 14:43:25 -- common/autotest_common.sh@875 -- # return 0 00:06:25.040 14:43:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.040 14:43:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.040 14:43:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.298 /dev/nbd1 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.298 14:43:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:25.298 14:43:25 -- common/autotest_common.sh@855 -- # local i 00:06:25.298 14:43:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:25.298 14:43:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:25.298 14:43:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:25.298 14:43:25 -- common/autotest_common.sh@859 -- # break 00:06:25.298 14:43:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:25.298 14:43:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:25.298 14:43:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.298 1+0 records in 00:06:25.298 1+0 records out 00:06:25.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220047 s, 18.6 MB/s 00:06:25.298 14:43:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:25.298 14:43:25 -- common/autotest_common.sh@872 -- # size=4096 00:06:25.298 
14:43:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:25.298 14:43:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:25.298 14:43:25 -- common/autotest_common.sh@875 -- # return 0 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.298 14:43:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.556 { 00:06:25.556 "nbd_device": "/dev/nbd0", 00:06:25.556 "bdev_name": "Malloc0" 00:06:25.556 }, 00:06:25.556 { 00:06:25.556 "nbd_device": "/dev/nbd1", 00:06:25.556 "bdev_name": "Malloc1" 00:06:25.556 } 00:06:25.556 ]' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.556 { 00:06:25.556 "nbd_device": "/dev/nbd0", 00:06:25.556 "bdev_name": "Malloc0" 00:06:25.556 }, 00:06:25.556 { 00:06:25.556 "nbd_device": "/dev/nbd1", 00:06:25.556 "bdev_name": "Malloc1" 00:06:25.556 } 00:06:25.556 ]' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.556 /dev/nbd1' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.556 /dev/nbd1' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' 
write 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.556 256+0 records in 00:06:25.556 256+0 records out 00:06:25.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519436 s, 202 MB/s 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.556 256+0 records in 00:06:25.556 256+0 records out 00:06:25.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312031 s, 33.6 MB/s 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.556 14:43:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.813 256+0 records in 00:06:25.813 256+0 records out 00:06:25.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033787 s, 31.0 MB/s 00:06:25.813 14:43:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.813 14:43:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.813 14:43:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.813 14:43:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.813 14:43:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.814 
14:43:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.814 14:43:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@41 -- # break 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.071 14:43:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:06:26.071 14:43:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@41 -- # break 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.329 14:43:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@65 -- # true 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.587 14:43:26 -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.587 14:43:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.153 
14:43:26 -- event/event.sh@35 -- # sleep 3 00:06:28.528 [2024-04-26 14:43:28.346090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.528 [2024-04-26 14:43:28.589364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.528 [2024-04-26 14:43:28.589372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.820 [2024-04-26 14:43:28.803009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.820 [2024-04-26 14:43:28.803102] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.195 14:43:29 -- event/event.sh@38 -- # waitforlisten 111452 /var/tmp/spdk-nbd.sock 00:06:30.195 14:43:29 -- common/autotest_common.sh@817 -- # '[' -z 111452 ']' 00:06:30.195 14:43:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.195 14:43:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.195 14:43:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:30.195 14:43:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.195 14:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.195 14:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.195 14:43:30 -- common/autotest_common.sh@850 -- # return 0 00:06:30.195 14:43:30 -- event/event.sh@39 -- # killprocess 111452 00:06:30.195 14:43:30 -- common/autotest_common.sh@936 -- # '[' -z 111452 ']' 00:06:30.195 14:43:30 -- common/autotest_common.sh@940 -- # kill -0 111452 00:06:30.195 14:43:30 -- common/autotest_common.sh@941 -- # uname 00:06:30.195 14:43:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.195 14:43:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111452 00:06:30.454 14:43:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.454 14:43:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.455 14:43:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111452' 00:06:30.455 killing process with pid 111452 00:06:30.455 14:43:30 -- common/autotest_common.sh@955 -- # kill 111452 00:06:30.455 14:43:30 -- common/autotest_common.sh@960 -- # wait 111452 00:06:31.830 spdk_app_start is called in Round 0. 00:06:31.830 Shutdown signal received, stop current app iteration 00:06:31.830 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:31.830 spdk_app_start is called in Round 1. 00:06:31.830 Shutdown signal received, stop current app iteration 00:06:31.830 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:31.830 spdk_app_start is called in Round 2. 00:06:31.830 Shutdown signal received, stop current app iteration 00:06:31.830 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:06:31.830 spdk_app_start is called in Round 3. 
00:06:31.830 Shutdown signal received, stop current app iteration 00:06:31.830 14:43:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:31.830 14:43:31 -- event/event.sh@42 -- # return 0 00:06:31.830 00:06:31.830 real 0m20.585s 00:06:31.830 user 0m42.956s 00:06:31.830 sys 0m3.369s 00:06:31.830 14:43:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.830 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 ************************************ 00:06:31.830 END TEST app_repeat 00:06:31.830 ************************************ 00:06:31.830 14:43:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:31.830 14:43:31 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.830 14:43:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.830 14:43:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.830 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 ************************************ 00:06:31.830 START TEST cpu_locks 00:06:31.830 ************************************ 00:06:31.830 14:43:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.830 * Looking for test storage... 
00:06:31.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:31.830 14:43:31 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.830 14:43:31 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.830 14:43:31 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.830 14:43:31 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.830 14:43:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.830 14:43:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.830 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 ************************************ 00:06:31.830 START TEST default_locks 00:06:31.830 ************************************ 00:06:31.830 14:43:31 -- common/autotest_common.sh@1111 -- # default_locks 00:06:31.830 14:43:31 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114094 00:06:31.830 14:43:31 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.830 14:43:31 -- event/cpu_locks.sh@47 -- # waitforlisten 114094 00:06:31.830 14:43:31 -- common/autotest_common.sh@817 -- # '[' -z 114094 ']' 00:06:31.830 14:43:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.830 14:43:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.830 14:43:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.830 14:43:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.830 14:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.830 [2024-04-26 14:43:31.873946] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:31.830 [2024-04-26 14:43:31.874076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114094 ] 00:06:32.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.089 [2024-04-26 14:43:32.004772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.347 [2024-04-26 14:43:32.252700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.282 14:43:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.282 14:43:33 -- common/autotest_common.sh@850 -- # return 0 00:06:33.282 14:43:33 -- event/cpu_locks.sh@49 -- # locks_exist 114094 00:06:33.282 14:43:33 -- event/cpu_locks.sh@22 -- # lslocks -p 114094 00:06:33.282 14:43:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.282 lslocks: write error 00:06:33.282 14:43:33 -- event/cpu_locks.sh@50 -- # killprocess 114094 00:06:33.282 14:43:33 -- common/autotest_common.sh@936 -- # '[' -z 114094 ']' 00:06:33.282 14:43:33 -- common/autotest_common.sh@940 -- # kill -0 114094 00:06:33.282 14:43:33 -- common/autotest_common.sh@941 -- # uname 00:06:33.282 14:43:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.282 14:43:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114094 00:06:33.540 14:43:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.540 14:43:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.540 14:43:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114094' 00:06:33.540 killing process with pid 114094 00:06:33.540 14:43:33 -- common/autotest_common.sh@955 -- # kill 114094 00:06:33.540 14:43:33 -- common/autotest_common.sh@960 -- # wait 114094 00:06:36.097 14:43:35 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114094 00:06:36.097 14:43:35 -- common/autotest_common.sh@638 -- # 
local es=0 00:06:36.097 14:43:35 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 114094 00:06:36.097 14:43:35 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:36.097 14:43:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:36.097 14:43:35 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:36.097 14:43:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:36.097 14:43:35 -- common/autotest_common.sh@641 -- # waitforlisten 114094 00:06:36.097 14:43:35 -- common/autotest_common.sh@817 -- # '[' -z 114094 ']' 00:06:36.097 14:43:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.097 14:43:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.097 14:43:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.097 14:43:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.097 14:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:36.097 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (114094) - No such process 00:06:36.097 ERROR: process (pid: 114094) is no longer running 00:06:36.097 14:43:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.097 14:43:35 -- common/autotest_common.sh@850 -- # return 1 00:06:36.097 14:43:35 -- common/autotest_common.sh@641 -- # es=1 00:06:36.097 14:43:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:36.097 14:43:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:36.097 14:43:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:36.097 14:43:35 -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.097 14:43:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.097 14:43:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.097 14:43:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.097 00:06:36.097 real 0m4.051s 00:06:36.097 user 0m4.058s 00:06:36.097 sys 0m0.692s 00:06:36.097 14:43:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.097 14:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:36.097 ************************************ 00:06:36.097 END TEST default_locks 00:06:36.097 ************************************ 00:06:36.097 14:43:35 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.097 14:43:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.097 14:43:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.097 14:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:36.097 ************************************ 00:06:36.097 START TEST default_locks_via_rpc 00:06:36.097 ************************************ 00:06:36.097 14:43:35 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:36.097 14:43:35 -- event/cpu_locks.sh@62 
-- # spdk_tgt_pid=114653 00:06:36.097 14:43:35 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.097 14:43:35 -- event/cpu_locks.sh@63 -- # waitforlisten 114653 00:06:36.097 14:43:35 -- common/autotest_common.sh@817 -- # '[' -z 114653 ']' 00:06:36.097 14:43:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.097 14:43:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.097 14:43:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.097 14:43:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.097 14:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:36.097 [2024-04-26 14:43:36.054164] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:36.097 [2024-04-26 14:43:36.054303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114653 ] 00:06:36.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.097 [2024-04-26 14:43:36.175489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.356 [2024-04-26 14:43:36.422629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.291 14:43:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:37.291 14:43:37 -- common/autotest_common.sh@850 -- # return 0 00:06:37.291 14:43:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.291 14:43:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.291 14:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:37.291 14:43:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.291 
14:43:37 -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.291 14:43:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.291 14:43:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.291 14:43:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.291 14:43:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.291 14:43:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.291 14:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:37.291 14:43:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.291 14:43:37 -- event/cpu_locks.sh@71 -- # locks_exist 114653 00:06:37.291 14:43:37 -- event/cpu_locks.sh@22 -- # lslocks -p 114653 00:06:37.291 14:43:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.549 14:43:37 -- event/cpu_locks.sh@73 -- # killprocess 114653 00:06:37.549 14:43:37 -- common/autotest_common.sh@936 -- # '[' -z 114653 ']' 00:06:37.549 14:43:37 -- common/autotest_common.sh@940 -- # kill -0 114653 00:06:37.549 14:43:37 -- common/autotest_common.sh@941 -- # uname 00:06:37.549 14:43:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.549 14:43:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114653 00:06:37.808 14:43:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.808 14:43:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.808 14:43:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114653' 00:06:37.808 killing process with pid 114653 00:06:37.808 14:43:37 -- common/autotest_common.sh@955 -- # kill 114653 00:06:37.808 14:43:37 -- common/autotest_common.sh@960 -- # wait 114653 00:06:40.339 00:06:40.339 real 0m4.134s 00:06:40.339 user 0m4.104s 00:06:40.339 sys 0m0.785s 00:06:40.339 14:43:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.339 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.339 ************************************ 00:06:40.339 END TEST 
default_locks_via_rpc 00:06:40.339 ************************************ 00:06:40.339 14:43:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.339 14:43:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.339 14:43:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.339 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.339 ************************************ 00:06:40.339 START TEST non_locking_app_on_locked_coremask 00:06:40.339 ************************************ 00:06:40.339 14:43:40 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:40.339 14:43:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=115217 00:06:40.339 14:43:40 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.339 14:43:40 -- event/cpu_locks.sh@81 -- # waitforlisten 115217 /var/tmp/spdk.sock 00:06:40.339 14:43:40 -- common/autotest_common.sh@817 -- # '[' -z 115217 ']' 00:06:40.339 14:43:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.339 14:43:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.339 14:43:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.339 14:43:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.339 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.339 [2024-04-26 14:43:40.307155] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:40.339 [2024-04-26 14:43:40.307309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115217 ] 00:06:40.339 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.597 [2024-04-26 14:43:40.441007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.856 [2024-04-26 14:43:40.689384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.791 14:43:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.791 14:43:41 -- common/autotest_common.sh@850 -- # return 0 00:06:41.791 14:43:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=115360 00:06:41.791 14:43:41 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:41.791 14:43:41 -- event/cpu_locks.sh@85 -- # waitforlisten 115360 /var/tmp/spdk2.sock 00:06:41.791 14:43:41 -- common/autotest_common.sh@817 -- # '[' -z 115360 ']' 00:06:41.791 14:43:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.791 14:43:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:41.791 14:43:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.791 14:43:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:41.791 14:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:41.791 [2024-04-26 14:43:41.620948] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:41.792 [2024-04-26 14:43:41.621085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115360 ] 00:06:41.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.792 [2024-04-26 14:43:41.808244] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.792 [2024-04-26 14:43:41.808309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.358 [2024-04-26 14:43:42.305822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.259 14:43:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.259 14:43:44 -- common/autotest_common.sh@850 -- # return 0 00:06:44.259 14:43:44 -- event/cpu_locks.sh@87 -- # locks_exist 115217 00:06:44.259 14:43:44 -- event/cpu_locks.sh@22 -- # lslocks -p 115217 00:06:44.259 14:43:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.825 lslocks: write error 00:06:44.825 14:43:44 -- event/cpu_locks.sh@89 -- # killprocess 115217 00:06:44.825 14:43:44 -- common/autotest_common.sh@936 -- # '[' -z 115217 ']' 00:06:44.825 14:43:44 -- common/autotest_common.sh@940 -- # kill -0 115217 00:06:44.825 14:43:44 -- common/autotest_common.sh@941 -- # uname 00:06:44.825 14:43:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.825 14:43:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115217 00:06:44.825 14:43:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.825 14:43:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.825 14:43:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115217' 00:06:44.825 killing process with pid 115217 00:06:44.825 14:43:44 -- common/autotest_common.sh@955 -- # kill 115217 00:06:44.825 14:43:44 -- common/autotest_common.sh@960 -- # wait 115217 00:06:50.092 14:43:49 -- 
event/cpu_locks.sh@90 -- # killprocess 115360 00:06:50.092 14:43:49 -- common/autotest_common.sh@936 -- # '[' -z 115360 ']' 00:06:50.092 14:43:49 -- common/autotest_common.sh@940 -- # kill -0 115360 00:06:50.092 14:43:49 -- common/autotest_common.sh@941 -- # uname 00:06:50.092 14:43:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.092 14:43:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115360 00:06:50.092 14:43:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.092 14:43:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.092 14:43:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115360' 00:06:50.092 killing process with pid 115360 00:06:50.092 14:43:49 -- common/autotest_common.sh@955 -- # kill 115360 00:06:50.092 14:43:49 -- common/autotest_common.sh@960 -- # wait 115360 00:06:52.737 00:06:52.737 real 0m11.875s 00:06:52.737 user 0m12.168s 00:06:52.737 sys 0m1.460s 00:06:52.737 14:43:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.737 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.737 ************************************ 00:06:52.737 END TEST non_locking_app_on_locked_coremask 00:06:52.737 ************************************ 00:06:52.737 14:43:52 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.738 14:43:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.738 14:43:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.738 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 ************************************ 00:06:52.738 START TEST locking_app_on_unlocked_coremask 00:06:52.738 ************************************ 00:06:52.738 14:43:52 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:52.738 14:43:52 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116604 00:06:52.738 14:43:52 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.738 14:43:52 -- event/cpu_locks.sh@99 -- # waitforlisten 116604 /var/tmp/spdk.sock 00:06:52.738 14:43:52 -- common/autotest_common.sh@817 -- # '[' -z 116604 ']' 00:06:52.738 14:43:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.738 14:43:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:52.738 14:43:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.738 14:43:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:52.738 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 [2024-04-26 14:43:52.298154] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:52.738 [2024-04-26 14:43:52.298295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116604 ] 00:06:52.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.738 [2024-04-26 14:43:52.429741] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.738 [2024-04-26 14:43:52.429798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.738 [2024-04-26 14:43:52.678348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.672 14:43:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:53.672 14:43:53 -- common/autotest_common.sh@850 -- # return 0 00:06:53.672 14:43:53 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116790 00:06:53.672 14:43:53 -- event/cpu_locks.sh@103 -- # waitforlisten 116790 /var/tmp/spdk2.sock 00:06:53.672 14:43:53 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.672 14:43:53 -- common/autotest_common.sh@817 -- # '[' -z 116790 ']' 00:06:53.672 14:43:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.672 14:43:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.672 14:43:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.672 14:43:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.672 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:06:53.672 [2024-04-26 14:43:53.647292] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:53.672 [2024-04-26 14:43:53.647423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116790 ] 00:06:53.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.930 [2024-04-26 14:43:53.846239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.496 [2024-04-26 14:43:54.326550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.399 14:43:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.399 14:43:56 -- common/autotest_common.sh@850 -- # return 0 00:06:56.399 14:43:56 -- event/cpu_locks.sh@105 -- # locks_exist 116790 00:06:56.399 14:43:56 -- event/cpu_locks.sh@22 -- # lslocks -p 116790 00:06:56.399 14:43:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.965 lslocks: write error 00:06:56.965 14:43:56 -- event/cpu_locks.sh@107 -- # killprocess 116604 00:06:56.965 14:43:56 -- common/autotest_common.sh@936 -- # '[' -z 116604 ']' 00:06:56.965 14:43:56 -- common/autotest_common.sh@940 -- # kill -0 116604 00:06:56.965 14:43:56 -- common/autotest_common.sh@941 -- # uname 00:06:56.965 14:43:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.965 14:43:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116604 00:06:56.965 14:43:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.965 14:43:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.965 14:43:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116604' 00:06:56.965 killing process with pid 116604 00:06:56.965 14:43:56 -- common/autotest_common.sh@955 -- # kill 116604 00:06:56.965 14:43:56 -- common/autotest_common.sh@960 -- # wait 116604 00:07:02.233 14:44:01 -- event/cpu_locks.sh@108 -- # killprocess 116790 00:07:02.233 14:44:01 -- common/autotest_common.sh@936 -- # '[' -z 
116790 ']' 00:07:02.233 14:44:01 -- common/autotest_common.sh@940 -- # kill -0 116790 00:07:02.233 14:44:01 -- common/autotest_common.sh@941 -- # uname 00:07:02.233 14:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.233 14:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116790 00:07:02.233 14:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.233 14:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.233 14:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116790' 00:07:02.233 killing process with pid 116790 00:07:02.233 14:44:01 -- common/autotest_common.sh@955 -- # kill 116790 00:07:02.233 14:44:01 -- common/autotest_common.sh@960 -- # wait 116790 00:07:04.764 00:07:04.764 real 0m12.049s 00:07:04.764 user 0m12.350s 00:07:04.764 sys 0m1.511s 00:07:04.764 14:44:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.764 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.764 ************************************ 00:07:04.764 END TEST locking_app_on_unlocked_coremask 00:07:04.764 ************************************ 00:07:04.764 14:44:04 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.764 14:44:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.764 14:44:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.764 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.764 ************************************ 00:07:04.764 START TEST locking_app_on_locked_coremask 00:07:04.764 ************************************ 00:07:04.764 14:44:04 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:04.764 14:44:04 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=118103 00:07:04.764 14:44:04 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.764 14:44:04 -- 
event/cpu_locks.sh@116 -- # waitforlisten 118103 /var/tmp/spdk.sock 00:07:04.764 14:44:04 -- common/autotest_common.sh@817 -- # '[' -z 118103 ']' 00:07:04.764 14:44:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.764 14:44:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:04.765 14:44:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.765 14:44:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:04.765 14:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.765 [2024-04-26 14:44:04.480288] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:04.765 [2024-04-26 14:44:04.480414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118103 ] 00:07:04.765 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.765 [2024-04-26 14:44:04.608984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.024 [2024-04-26 14:44:04.860174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.961 14:44:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.961 14:44:05 -- common/autotest_common.sh@850 -- # return 0 00:07:05.961 14:44:05 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=118242 00:07:05.961 14:44:05 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 118242 /var/tmp/spdk2.sock 00:07:05.961 14:44:05 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.961 14:44:05 -- common/autotest_common.sh@638 -- # local es=0 00:07:05.961 14:44:05 -- common/autotest_common.sh@640 -- # valid_exec_arg 
waitforlisten 118242 /var/tmp/spdk2.sock 00:07:05.961 14:44:05 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:05.961 14:44:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.961 14:44:05 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:05.961 14:44:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:05.961 14:44:05 -- common/autotest_common.sh@641 -- # waitforlisten 118242 /var/tmp/spdk2.sock 00:07:05.961 14:44:05 -- common/autotest_common.sh@817 -- # '[' -z 118242 ']' 00:07:05.961 14:44:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.961 14:44:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.961 14:44:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.961 14:44:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.961 14:44:05 -- common/autotest_common.sh@10 -- # set +x 00:07:05.961 [2024-04-26 14:44:05.816720] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:05.961 [2024-04-26 14:44:05.816854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118242 ] 00:07:05.961 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.961 [2024-04-26 14:44:06.005758] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 118103 has claimed it. 00:07:05.961 [2024-04-26 14:44:06.005852] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
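The `lslocks -p <pid> | grep -q spdk_cpu_lock` checks in this transcript work because each claimed core is backed by an advisory lock held on a `/var/tmp/spdk_cpu_lock_*` file, which is why the second target above fails with "Cannot create lock on core 0". A minimal sketch of that claim/deny pattern using `flock(1)`; the lock path below is a stand-in for illustration, not SPDK's real lock file:

```shell
# Stand-in lock file (hypothetical path, not SPDK's /var/tmp/spdk_cpu_lock_000).
lock=/tmp/demo_cpu_lock_$$

# First claimant: open the file and take an exclusive, non-blocking lock.
exec 9>"$lock"
if flock -n 9; then first=claimed; else first=busy; fi

# Second claimant: a new open file description on the same file (as a second
# process would have) is refused while fd 9 still holds the exclusive lock.
second=$( exec 8>"$lock"; if flock -n 8; then echo claimed; else echo busy; fi )

echo "first=$first second=$second"
rm -f "$lock"
```

This mirrors the behavior logged above: the first `spdk_tgt` claims the core, and the overlapping second instance is told the core is already held.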
00:07:06.528 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (118242) - No such process 00:07:06.528 ERROR: process (pid: 118242) is no longer running 00:07:06.528 14:44:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.528 14:44:06 -- common/autotest_common.sh@850 -- # return 1 00:07:06.528 14:44:06 -- common/autotest_common.sh@641 -- # es=1 00:07:06.528 14:44:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:06.528 14:44:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:06.528 14:44:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:06.528 14:44:06 -- event/cpu_locks.sh@122 -- # locks_exist 118103 00:07:06.528 14:44:06 -- event/cpu_locks.sh@22 -- # lslocks -p 118103 00:07:06.528 14:44:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.787 lslocks: write error 00:07:06.787 14:44:06 -- event/cpu_locks.sh@124 -- # killprocess 118103 00:07:06.787 14:44:06 -- common/autotest_common.sh@936 -- # '[' -z 118103 ']' 00:07:06.788 14:44:06 -- common/autotest_common.sh@940 -- # kill -0 118103 00:07:06.788 14:44:06 -- common/autotest_common.sh@941 -- # uname 00:07:06.788 14:44:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.788 14:44:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118103 00:07:06.788 14:44:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.788 14:44:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.788 14:44:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118103' 00:07:06.788 killing process with pid 118103 00:07:06.788 14:44:06 -- common/autotest_common.sh@955 -- # kill 118103 00:07:06.788 14:44:06 -- common/autotest_common.sh@960 -- # wait 118103 00:07:09.319 00:07:09.319 real 0m4.925s 00:07:09.319 user 0m5.091s 00:07:09.319 sys 0m0.975s 00:07:09.319 14:44:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.319 14:44:09 -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.319 ************************************ 00:07:09.319 END TEST locking_app_on_locked_coremask 00:07:09.319 ************************************ 00:07:09.319 14:44:09 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.319 14:44:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.319 14:44:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.319 14:44:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.578 ************************************ 00:07:09.578 START TEST locking_overlapped_coremask 00:07:09.578 ************************************ 00:07:09.578 14:44:09 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:09.578 14:44:09 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=118803 00:07:09.578 14:44:09 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.578 14:44:09 -- event/cpu_locks.sh@133 -- # waitforlisten 118803 /var/tmp/spdk.sock 00:07:09.578 14:44:09 -- common/autotest_common.sh@817 -- # '[' -z 118803 ']' 00:07:09.578 14:44:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.578 14:44:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.578 14:44:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.578 14:44:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.578 14:44:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.578 [2024-04-26 14:44:09.521836] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:09.578 [2024-04-26 14:44:09.521978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118803 ] 00:07:09.578 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.578 [2024-04-26 14:44:09.650745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.836 [2024-04-26 14:44:09.902077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.836 [2024-04-26 14:44:09.902146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.836 [2024-04-26 14:44:09.902161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.770 14:44:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.770 14:44:10 -- common/autotest_common.sh@850 -- # return 0 00:07:10.770 14:44:10 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=118944 00:07:10.770 14:44:10 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:10.770 14:44:10 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 118944 /var/tmp/spdk2.sock 00:07:10.770 14:44:10 -- common/autotest_common.sh@638 -- # local es=0 00:07:10.770 14:44:10 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 118944 /var/tmp/spdk2.sock 00:07:10.770 14:44:10 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:10.770 14:44:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.770 14:44:10 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:10.770 14:44:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:10.770 14:44:10 -- common/autotest_common.sh@641 -- # waitforlisten 118944 /var/tmp/spdk2.sock 00:07:10.770 14:44:10 -- common/autotest_common.sh@817 -- # '[' -z 118944 ']' 00:07:10.771 14:44:10 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:10.771 14:44:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.771 14:44:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.771 14:44:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.771 14:44:10 -- common/autotest_common.sh@10 -- # set +x 00:07:10.771 [2024-04-26 14:44:10.782420] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:10.771 [2024-04-26 14:44:10.782574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118944 ] 00:07:11.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.030 [2024-04-26 14:44:10.953518] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118803 has claimed it. 00:07:11.030 [2024-04-26 14:44:10.953607] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:11.597 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (118944) - No such process 00:07:11.597 ERROR: process (pid: 118944) is no longer running 00:07:11.597 14:44:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.597 14:44:11 -- common/autotest_common.sh@850 -- # return 1 00:07:11.597 14:44:11 -- common/autotest_common.sh@641 -- # es=1 00:07:11.597 14:44:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.597 14:44:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.597 14:44:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.597 14:44:11 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:11.597 14:44:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.597 14:44:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.597 14:44:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.597 14:44:11 -- event/cpu_locks.sh@141 -- # killprocess 118803 00:07:11.597 14:44:11 -- common/autotest_common.sh@936 -- # '[' -z 118803 ']' 00:07:11.597 14:44:11 -- common/autotest_common.sh@940 -- # kill -0 118803 00:07:11.597 14:44:11 -- common/autotest_common.sh@941 -- # uname 00:07:11.597 14:44:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.597 14:44:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118803 00:07:11.597 14:44:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.597 14:44:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.597 14:44:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118803' 00:07:11.597 killing process with pid 118803 00:07:11.597 14:44:11 -- 
common/autotest_common.sh@955 -- # kill 118803 00:07:11.597 14:44:11 -- common/autotest_common.sh@960 -- # wait 118803 00:07:14.128 00:07:14.128 real 0m4.282s 00:07:14.128 user 0m11.066s 00:07:14.128 sys 0m0.764s 00:07:14.128 14:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.128 14:44:13 -- common/autotest_common.sh@10 -- # set +x 00:07:14.128 ************************************ 00:07:14.128 END TEST locking_overlapped_coremask 00:07:14.128 ************************************ 00:07:14.128 14:44:13 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.128 14:44:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.128 14:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.128 14:44:13 -- common/autotest_common.sh@10 -- # set +x 00:07:14.128 ************************************ 00:07:14.128 START TEST locking_overlapped_coremask_via_rpc 00:07:14.128 ************************************ 00:07:14.128 14:44:13 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:14.128 14:44:13 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=119305 00:07:14.128 14:44:13 -- event/cpu_locks.sh@149 -- # waitforlisten 119305 /var/tmp/spdk.sock 00:07:14.128 14:44:13 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.128 14:44:13 -- common/autotest_common.sh@817 -- # '[' -z 119305 ']' 00:07:14.128 14:44:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.128 14:44:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.128 14:44:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
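The `check_remaining_locks` step above globs `/var/tmp/spdk_cpu_lock_*` and compares the result against the names expected for the target's core mask; for `-m 0x7` that is cores 0 through 2. A small sketch of deriving the expected lock-file names from a mask (the derivation is an assumption matching the `spdk_cpu_lock_{000..002}` expansion seen in the log):

```shell
# Expand a CPU core mask into the per-core lock-file names the test expects.
mask=$((0x7))          # -m 0x7 -> cores 0, 1, 2
expected=""
core=0
while [ "$core" -lt 32 ]; do
  if [ $(( (mask >> core) & 1 )) -eq 1 ]; then
    expected="$expected/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core") "
  fi
  core=$((core + 1))
done
echo "$expected"
```

With mask `0x7` this yields `/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002`, matching the comparison string in the transcript.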
00:07:14.128 14:44:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.128 14:44:13 -- common/autotest_common.sh@10 -- # set +x 00:07:14.128 [2024-04-26 14:44:13.942208] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:14.128 [2024-04-26 14:44:13.942343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119305 ] 00:07:14.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.128 [2024-04-26 14:44:14.072536] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.128 [2024-04-26 14:44:14.072600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.387 [2024-04-26 14:44:14.321841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.387 [2024-04-26 14:44:14.321906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.387 [2024-04-26 14:44:14.321897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.344 14:44:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.344 14:44:15 -- common/autotest_common.sh@850 -- # return 0 00:07:15.344 14:44:15 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=119515 00:07:15.344 14:44:15 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.344 14:44:15 -- event/cpu_locks.sh@153 -- # waitforlisten 119515 /var/tmp/spdk2.sock 00:07:15.344 14:44:15 -- common/autotest_common.sh@817 -- # '[' -z 119515 ']' 00:07:15.344 14:44:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.344 14:44:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.344 14:44:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk2.sock...' 00:07:15.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.344 14:44:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.344 14:44:15 -- common/autotest_common.sh@10 -- # set +x 00:07:15.344 [2024-04-26 14:44:15.197891] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:15.344 [2024-04-26 14:44:15.198040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119515 ] 00:07:15.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.344 [2024-04-26 14:44:15.372326] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:15.344 [2024-04-26 14:44:15.372399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.913 [2024-04-26 14:44:15.822246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.913 [2024-04-26 14:44:15.825193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.913 [2024-04-26 14:44:15.825203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.814 14:44:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.814 14:44:17 -- common/autotest_common.sh@850 -- # return 0 00:07:17.814 14:44:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.814 14:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.814 14:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:17.814 14:44:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.814 14:44:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.814 14:44:17 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.814 14:44:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock 
framework_enable_cpumask_locks 00:07:17.814 14:44:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:17.814 14:44:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.814 14:44:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:17.814 14:44:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.814 14:44:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.814 14:44:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.814 14:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:17.814 [2024-04-26 14:44:17.893323] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 119305 has claimed it. 00:07:18.073 request: 00:07:18.073 { 00:07:18.073 "method": "framework_enable_cpumask_locks", 00:07:18.073 "req_id": 1 00:07:18.073 } 00:07:18.073 Got JSON-RPC error response 00:07:18.073 response: 00:07:18.073 { 00:07:18.073 "code": -32603, 00:07:18.073 "message": "Failed to claim CPU core: 2" 00:07:18.073 } 00:07:18.073 14:44:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:18.073 14:44:17 -- common/autotest_common.sh@641 -- # es=1 00:07:18.073 14:44:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:18.073 14:44:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:18.073 14:44:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:18.073 14:44:17 -- event/cpu_locks.sh@158 -- # waitforlisten 119305 /var/tmp/spdk.sock 00:07:18.073 14:44:17 -- common/autotest_common.sh@817 -- # '[' -z 119305 ']' 00:07:18.073 14:44:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.073 14:44:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.073 14:44:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:18.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.073 14:44:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.073 14:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:18.073 14:44:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.073 14:44:18 -- common/autotest_common.sh@850 -- # return 0 00:07:18.073 14:44:18 -- event/cpu_locks.sh@159 -- # waitforlisten 119515 /var/tmp/spdk2.sock 00:07:18.073 14:44:18 -- common/autotest_common.sh@817 -- # '[' -z 119515 ']' 00:07:18.073 14:44:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.073 14:44:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.073 14:44:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.073 14:44:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.073 14:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:18.331 14:44:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.331 14:44:18 -- common/autotest_common.sh@850 -- # return 0 00:07:18.331 14:44:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:18.331 14:44:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.331 14:44:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.331 14:44:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.331 00:07:18.331 real 0m4.535s 00:07:18.331 user 0m1.466s 00:07:18.331 sys 0m0.265s 00:07:18.331 14:44:18 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:07:18.331 14:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:18.331 ************************************ 00:07:18.331 END TEST locking_overlapped_coremask_via_rpc 00:07:18.331 ************************************ 00:07:18.331 14:44:18 -- event/cpu_locks.sh@174 -- # cleanup 00:07:18.331 14:44:18 -- event/cpu_locks.sh@15 -- # [[ -z 119305 ]] 00:07:18.331 14:44:18 -- event/cpu_locks.sh@15 -- # killprocess 119305 00:07:18.332 14:44:18 -- common/autotest_common.sh@936 -- # '[' -z 119305 ']' 00:07:18.332 14:44:18 -- common/autotest_common.sh@940 -- # kill -0 119305 00:07:18.332 14:44:18 -- common/autotest_common.sh@941 -- # uname 00:07:18.332 14:44:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.332 14:44:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119305 00:07:18.590 14:44:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.590 14:44:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.590 14:44:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119305' 00:07:18.590 killing process with pid 119305 00:07:18.590 14:44:18 -- common/autotest_common.sh@955 -- # kill 119305 00:07:18.590 14:44:18 -- common/autotest_common.sh@960 -- # wait 119305 00:07:21.120 14:44:20 -- event/cpu_locks.sh@16 -- # [[ -z 119515 ]] 00:07:21.120 14:44:20 -- event/cpu_locks.sh@16 -- # killprocess 119515 00:07:21.120 14:44:20 -- common/autotest_common.sh@936 -- # '[' -z 119515 ']' 00:07:21.120 14:44:20 -- common/autotest_common.sh@940 -- # kill -0 119515 00:07:21.120 14:44:20 -- common/autotest_common.sh@941 -- # uname 00:07:21.120 14:44:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.120 14:44:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119515 00:07:21.120 14:44:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:21.120 14:44:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 
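The `via_rpc` variant above drives the same core claim through JSON-RPC on the second target's socket (`rpc_cmd` in the test wraps SPDK's `scripts/rpc.py`). The logged exchange, rebuilt here for reference; nothing in this sketch contacts a live target:

```shell
# Request and error response copied from the transcript above.
request='{ "method": "framework_enable_cpumask_locks", "req_id": 1 }'
error_code=-32603
error_message='Failed to claim CPU core: 2'

printf 'request:  %s\n' "$request"
printf 'response: error %s (%s)\n' "$error_code" "$error_message"
```

The `-32603` (internal error) response is what `NOT rpc_cmd … framework_enable_cpumask_locks` asserts on: enabling locks in the second target must fail while process 119305 still holds core 2.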
00:07:21.120 14:44:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119515' 00:07:21.120 killing process with pid 119515 00:07:21.120 14:44:20 -- common/autotest_common.sh@955 -- # kill 119515 00:07:21.120 14:44:20 -- common/autotest_common.sh@960 -- # wait 119515 00:07:23.024 14:44:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.024 14:44:22 -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.024 14:44:22 -- event/cpu_locks.sh@15 -- # [[ -z 119305 ]] 00:07:23.024 14:44:22 -- event/cpu_locks.sh@15 -- # killprocess 119305 00:07:23.024 14:44:22 -- common/autotest_common.sh@936 -- # '[' -z 119305 ']' 00:07:23.024 14:44:22 -- common/autotest_common.sh@940 -- # kill -0 119305 00:07:23.024 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (119305) - No such process 00:07:23.024 14:44:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 119305 is not found' 00:07:23.024 Process with pid 119305 is not found 00:07:23.024 14:44:22 -- event/cpu_locks.sh@16 -- # [[ -z 119515 ]] 00:07:23.024 14:44:22 -- event/cpu_locks.sh@16 -- # killprocess 119515 00:07:23.024 14:44:22 -- common/autotest_common.sh@936 -- # '[' -z 119515 ']' 00:07:23.024 14:44:22 -- common/autotest_common.sh@940 -- # kill -0 119515 00:07:23.024 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (119515) - No such process 00:07:23.024 14:44:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 119515 is not found' 00:07:23.024 Process with pid 119515 is not found 00:07:23.024 14:44:22 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.024 00:07:23.024 real 0m51.306s 00:07:23.024 user 1m24.719s 00:07:23.024 sys 0m7.932s 00:07:23.024 14:44:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.024 14:44:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.024 ************************************ 00:07:23.024 END TEST cpu_locks 00:07:23.024 ************************************ 
00:07:23.024 00:07:23.024 real 1m21.113s 00:07:23.024 user 2m21.703s 00:07:23.024 sys 0m12.709s 00:07:23.025 14:44:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.025 14:44:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.025 ************************************ 00:07:23.025 END TEST event 00:07:23.025 ************************************ 00:07:23.025 14:44:22 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:23.025 14:44:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.025 14:44:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.025 14:44:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.025 ************************************ 00:07:23.025 START TEST thread 00:07:23.025 ************************************ 00:07:23.025 14:44:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:23.285 * Looking for test storage... 00:07:23.285 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:23.285 14:44:23 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.285 14:44:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:23.285 14:44:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.285 14:44:23 -- common/autotest_common.sh@10 -- # set +x 00:07:23.285 ************************************ 00:07:23.285 START TEST thread_poller_perf 00:07:23.285 ************************************ 00:07:23.285 14:44:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.285 [2024-04-26 14:44:23.272977] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:23.285 [2024-04-26 14:44:23.273106] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120561 ] 00:07:23.285 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.544 [2024-04-26 14:44:23.402029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.802 [2024-04-26 14:44:23.652889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.802 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:25.176 ====================================== 00:07:25.176 busy:2717893554 (cyc) 00:07:25.176 total_run_count: 291000 00:07:25.176 tsc_hz: 2700000000 (cyc) 00:07:25.176 ====================================== 00:07:25.176 poller_cost: 9339 (cyc), 3458 (nsec) 00:07:25.176 00:07:25.176 real 0m1.843s 00:07:25.176 user 0m1.668s 00:07:25.176 sys 0m0.163s 00:07:25.176 14:44:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.176 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 ************************************ 00:07:25.176 END TEST thread_poller_perf 00:07:25.176 ************************************ 00:07:25.176 14:44:25 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.176 14:44:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:25.176 14:44:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.176 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.176 ************************************ 00:07:25.176 START TEST thread_poller_perf 00:07:25.176 ************************************ 00:07:25.176 14:44:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.176 [2024-04-26 
14:44:25.232998] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:25.176 [2024-04-26 14:44:25.233124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120825 ] 00:07:25.434 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.434 [2024-04-26 14:44:25.363322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.693 [2024-04-26 14:44:25.611137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.693 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:27.070 ====================================== 00:07:27.070 busy:2705135446 (cyc) 00:07:27.070 total_run_count: 3631000 00:07:27.070 tsc_hz: 2700000000 (cyc) 00:07:27.070 ====================================== 00:07:27.070 poller_cost: 745 (cyc), 275 (nsec) 00:07:27.070 00:07:27.070 real 0m1.829s 00:07:27.070 user 0m1.675s 00:07:27.070 sys 0m0.144s 00:07:27.070 14:44:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.070 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:07:27.070 ************************************ 00:07:27.070 END TEST thread_poller_perf 00:07:27.070 ************************************ 00:07:27.070 14:44:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.070 00:07:27.070 real 0m3.966s 00:07:27.070 user 0m3.440s 00:07:27.070 sys 0m0.492s 00:07:27.070 14:44:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.070 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:07:27.070 ************************************ 00:07:27.070 END TEST thread 00:07:27.070 ************************************ 00:07:27.070 14:44:27 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:27.070 14:44:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:07:27.070 14:44:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.070 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:07:27.330 ************************************ 00:07:27.330 START TEST accel 00:07:27.330 ************************************ 00:07:27.330 14:44:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:27.330 * Looking for test storage... 00:07:27.330 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:27.330 14:44:27 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:27.330 14:44:27 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:27.330 14:44:27 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.330 14:44:27 -- accel/accel.sh@62 -- # spdk_tgt_pid=121064 00:07:27.330 14:44:27 -- accel/accel.sh@63 -- # waitforlisten 121064 00:07:27.330 14:44:27 -- common/autotest_common.sh@817 -- # '[' -z 121064 ']' 00:07:27.330 14:44:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.330 14:44:27 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:27.330 14:44:27 -- accel/accel.sh@61 -- # build_accel_config 00:07:27.330 14:44:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:27.330 14:44:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.330 14:44:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.330 14:44:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.330 14:44:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:27.330 14:44:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.330 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:07:27.330 14:44:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.330 14:44:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.330 14:44:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.330 14:44:27 -- accel/accel.sh@41 -- # jq -r . 00:07:27.330 [2024-04-26 14:44:27.326847] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:27.330 [2024-04-26 14:44:27.326977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121064 ] 00:07:27.330 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.590 [2024-04-26 14:44:27.457489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.848 [2024-04-26 14:44:27.706607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.785 14:44:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:28.785 14:44:28 -- common/autotest_common.sh@850 -- # return 0 00:07:28.785 14:44:28 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:28.785 14:44:28 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:28.785 14:44:28 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:28.785 14:44:28 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:28.785 14:44:28 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:28.785 14:44:28 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:28.785 14:44:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.785 14:44:28 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:28.785 14:44:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.785 14:44:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 
00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:28.785 14:44:28 -- accel/accel.sh@72 -- # IFS== 00:07:28.785 14:44:28 -- 
accel/accel.sh@72 -- # read -r opc module 00:07:28.785 14:44:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:28.785 14:44:28 -- accel/accel.sh@75 -- # killprocess 121064 00:07:28.785 14:44:28 -- common/autotest_common.sh@936 -- # '[' -z 121064 ']' 00:07:28.785 14:44:28 -- common/autotest_common.sh@940 -- # kill -0 121064 00:07:28.785 14:44:28 -- common/autotest_common.sh@941 -- # uname 00:07:28.785 14:44:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.785 14:44:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121064 00:07:28.785 14:44:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.785 14:44:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.785 14:44:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121064' 00:07:28.785 killing process with pid 121064 00:07:28.785 14:44:28 -- common/autotest_common.sh@955 -- # kill 121064 00:07:28.785 14:44:28 -- common/autotest_common.sh@960 -- # wait 121064 00:07:31.320 14:44:31 -- accel/accel.sh@76 -- # trap - ERR 00:07:31.320 14:44:31 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:31.320 14:44:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.320 14:44:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.320 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:31.320 14:44:31 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:31.320 14:44:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:31.320 14:44:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.320 14:44:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.320 14:44:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:31.320 14:44:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.320 14:44:31 -- accel/accel.sh@41 -- # jq -r . 00:07:31.320 14:44:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.320 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:31.320 14:44:31 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:31.320 14:44:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:31.320 14:44:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.320 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:31.320 ************************************ 00:07:31.320 START TEST accel_missing_filename 00:07:31.320 ************************************ 00:07:31.320 14:44:31 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:31.320 14:44:31 -- common/autotest_common.sh@638 -- # local es=0 00:07:31.320 14:44:31 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:31.320 14:44:31 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:31.320 14:44:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.320 14:44:31 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:31.320 14:44:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.320 14:44:31 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:31.320 14:44:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:31.320 14:44:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.320 14:44:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.320 14:44:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.320 14:44:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.320 14:44:31 -- accel/accel.sh@40 
-- # local IFS=, 00:07:31.320 14:44:31 -- accel/accel.sh@41 -- # jq -r . 00:07:31.580 [2024-04-26 14:44:31.427810] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:31.580 [2024-04-26 14:44:31.427932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121644 ] 00:07:31.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.580 [2024-04-26 14:44:31.554920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.840 [2024-04-26 14:44:31.805388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.098 [2024-04-26 14:44:32.032926] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.665 [2024-04-26 14:44:32.585220] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:32.925 A filename is required. 00:07:33.184 14:44:33 -- common/autotest_common.sh@641 -- # es=234 00:07:33.184 14:44:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:33.184 14:44:33 -- common/autotest_common.sh@650 -- # es=106 00:07:33.184 14:44:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:33.184 14:44:33 -- common/autotest_common.sh@658 -- # es=1 00:07:33.184 14:44:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:33.184 00:07:33.184 real 0m1.628s 00:07:33.184 user 0m1.407s 00:07:33.184 sys 0m0.249s 00:07:33.184 14:44:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.184 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:33.184 ************************************ 00:07:33.184 END TEST accel_missing_filename 00:07:33.184 ************************************ 00:07:33.184 14:44:33 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:33.184 14:44:33 -- 
common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:33.184 14:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.184 14:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:33.184 ************************************ 00:07:33.184 START TEST accel_compress_verify 00:07:33.184 ************************************ 00:07:33.184 14:44:33 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:33.184 14:44:33 -- common/autotest_common.sh@638 -- # local es=0 00:07:33.184 14:44:33 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:33.184 14:44:33 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:33.184 14:44:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.184 14:44:33 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:33.184 14:44:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.184 14:44:33 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:33.184 14:44:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:33.184 14:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.184 14:44:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.184 14:44:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.184 14:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.184 14:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.184 14:44:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.184 14:44:33 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.184 14:44:33 -- accel/accel.sh@41 -- # jq -r . 
00:07:33.184 [2024-04-26 14:44:33.184258] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:33.184 [2024-04-26 14:44:33.184373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121925 ] 00:07:33.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.444 [2024-04-26 14:44:33.311654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.702 [2024-04-26 14:44:33.540854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.702 [2024-04-26 14:44:33.765306] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.270 [2024-04-26 14:44:34.323250] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:34.846 00:07:34.846 Compression does not support the verify option, aborting. 00:07:34.846 14:44:34 -- common/autotest_common.sh@641 -- # es=161 00:07:34.846 14:44:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.846 14:44:34 -- common/autotest_common.sh@650 -- # es=33 00:07:34.846 14:44:34 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:34.846 14:44:34 -- common/autotest_common.sh@658 -- # es=1 00:07:34.846 14:44:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.846 00:07:34.846 real 0m1.599s 00:07:34.846 user 0m1.396s 00:07:34.846 sys 0m0.231s 00:07:34.846 14:44:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.846 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:07:34.846 ************************************ 00:07:34.846 END TEST accel_compress_verify 00:07:34.846 ************************************ 00:07:34.846 14:44:34 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:34.846 14:44:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:34.846 14:44:34 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.846 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:07:34.846 ************************************ 00:07:34.846 START TEST accel_wrong_workload 00:07:34.846 ************************************ 00:07:34.846 14:44:34 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:34.846 14:44:34 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.846 14:44:34 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:34.846 14:44:34 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:34.846 14:44:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.846 14:44:34 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:34.846 14:44:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.846 14:44:34 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:34.846 14:44:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:34.846 14:44:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.846 14:44:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.846 14:44:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.846 14:44:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.846 14:44:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.846 14:44:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.846 14:44:34 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.846 14:44:34 -- accel/accel.sh@41 -- # jq -r . 
00:07:34.846 Unsupported workload type: foobar 00:07:34.846 [2024-04-26 14:44:34.889878] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:34.846 accel_perf options: 00:07:34.846 [-h help message] 00:07:34.846 [-q queue depth per core] 00:07:34.846 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:34.846 [-T number of threads per core 00:07:34.846 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:34.846 [-t time in seconds] 00:07:34.846 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:34.846 [ dif_verify, , dif_generate, dif_generate_copy 00:07:34.846 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:34.846 [-l for compress/decompress workloads, name of uncompressed input file 00:07:34.846 [-S for crc32c workload, use this seed value (default 0) 00:07:34.846 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:34.846 [-f for fill workload, use this BYTE value (default 255) 00:07:34.846 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:34.846 [-y verify result if this switch is on] 00:07:34.846 [-a tasks to allocate per core (default: same value as -q)] 00:07:34.846 Can be used to spread operations across a wider range of memory. 
00:07:34.846 14:44:34 -- common/autotest_common.sh@641 -- # es=1 00:07:34.846 14:44:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.846 14:44:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.846 14:44:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.846 00:07:34.846 real 0m0.054s 00:07:34.846 user 0m0.059s 00:07:34.846 sys 0m0.031s 00:07:34.846 14:44:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.846 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:07:34.846 ************************************ 00:07:34.846 END TEST accel_wrong_workload 00:07:34.846 ************************************ 00:07:35.106 14:44:34 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.106 14:44:34 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:35.106 14:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.106 14:44:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.106 ************************************ 00:07:35.106 START TEST accel_negative_buffers 00:07:35.107 ************************************ 00:07:35.107 14:44:35 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:35.107 14:44:35 -- common/autotest_common.sh@638 -- # local es=0 00:07:35.107 14:44:35 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:35.107 14:44:35 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:35.107 14:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.107 14:44:35 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:35.107 14:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:35.107 14:44:35 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:35.107 14:44:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:07:35.107 14:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.107 14:44:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.107 14:44:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.107 14:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.107 14:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.107 14:44:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.107 14:44:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.107 14:44:35 -- accel/accel.sh@41 -- # jq -r . 00:07:35.107 -x option must be non-negative. 00:07:35.107 [2024-04-26 14:44:35.076420] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:35.107 accel_perf options: 00:07:35.107 [-h help message] 00:07:35.107 [-q queue depth per core] 00:07:35.107 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:35.107 [-T number of threads per core 00:07:35.107 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:35.107 [-t time in seconds] 00:07:35.107 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:35.107 [ dif_verify, , dif_generate, dif_generate_copy 00:07:35.107 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:35.107 [-l for compress/decompress workloads, name of uncompressed input file 00:07:35.107 [-S for crc32c workload, use this seed value (default 0) 00:07:35.107 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:35.107 [-f for fill workload, use this BYTE value (default 255) 00:07:35.107 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:35.107 [-y verify result if this switch is on] 00:07:35.107 [-a tasks to allocate per core (default: same value as -q)] 00:07:35.107 Can be used to spread operations across a wider range of memory. 
00:07:35.107 14:44:35 -- common/autotest_common.sh@641 -- # es=1 00:07:35.107 14:44:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:35.107 14:44:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:35.107 14:44:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:35.107 00:07:35.107 real 0m0.058s 00:07:35.107 user 0m0.066s 00:07:35.107 sys 0m0.029s 00:07:35.107 14:44:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.107 14:44:35 -- common/autotest_common.sh@10 -- # set +x 00:07:35.107 ************************************ 00:07:35.107 END TEST accel_negative_buffers 00:07:35.107 ************************************ 00:07:35.107 14:44:35 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:35.107 14:44:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:35.107 14:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.107 14:44:35 -- common/autotest_common.sh@10 -- # set +x 00:07:35.366 ************************************ 00:07:35.366 START TEST accel_crc32c 00:07:35.366 ************************************ 00:07:35.366 14:44:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:35.366 14:44:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.366 14:44:35 -- accel/accel.sh@17 -- # local accel_module 00:07:35.366 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.366 14:44:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:35.366 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.366 14:44:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:35.366 14:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.366 14:44:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.366 14:44:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.366 14:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.366 14:44:35 -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.366 14:44:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.366 14:44:35 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.366 14:44:35 -- accel/accel.sh@41 -- # jq -r . 00:07:35.366 [2024-04-26 14:44:35.260836] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:35.366 [2024-04-26 14:44:35.260950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122156 ] 00:07:35.366 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.366 [2024-04-26 14:44:35.394247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.625 [2024-04-26 14:44:35.644450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=0x1 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 
00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=crc32c 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=32 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=software 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=32 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=32 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=1 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 
14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val=Yes 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:35.884 14:44:35 -- accel/accel.sh@20 -- # val= 00:07:35.884 14:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # IFS=: 00:07:35.884 14:44:35 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 
00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@20 -- # val= 00:07:37.789 14:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:37.789 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:37.789 14:44:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.789 14:44:37 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:37.789 14:44:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.789 00:07:37.789 real 0m2.619s 00:07:37.789 user 0m2.363s 00:07:37.789 sys 0m0.251s 00:07:37.789 14:44:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.789 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.789 ************************************ 00:07:37.789 END TEST accel_crc32c 00:07:37.789 ************************************ 00:07:37.789 14:44:37 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:37.789 14:44:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:37.789 14:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.789 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:38.048 ************************************ 00:07:38.048 START TEST accel_crc32c_C2 00:07:38.048 ************************************ 00:07:38.048 14:44:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:38.049 14:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.049 14:44:37 -- accel/accel.sh@17 -- # local accel_module 00:07:38.049 14:44:37 -- accel/accel.sh@19 -- # IFS=: 00:07:38.049 14:44:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:38.049 14:44:37 -- accel/accel.sh@19 -- # read -r var val 00:07:38.049 14:44:37 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:38.049 14:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.049 14:44:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.049 14:44:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.049 14:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.049 14:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.049 14:44:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.049 14:44:37 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.049 14:44:37 -- accel/accel.sh@41 -- # jq -r . 00:07:38.049 [2024-04-26 14:44:38.012818] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:38.049 [2024-04-26 14:44:38.012933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122562 ] 00:07:38.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.308 [2024-04-26 14:44:38.137030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.308 [2024-04-26 14:44:38.366661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=0x1 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 
-- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=crc32c 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=0 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=software 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@22 -- # accel_module=software 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=32 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # 
read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=32 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=1 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val=Yes 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:38.567 14:44:38 -- accel/accel.sh@20 -- # val= 00:07:38.567 14:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # IFS=: 00:07:38.567 14:44:38 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@20 -- # val= 00:07:41.102 14:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.102 14:44:40 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.102 14:44:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.102 00:07:41.102 real 0m2.610s 00:07:41.102 user 0m2.367s 00:07:41.102 sys 0m0.240s 00:07:41.102 14:44:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.102 14:44:40 -- common/autotest_common.sh@10 -- # set +x 00:07:41.102 ************************************ 00:07:41.102 END TEST accel_crc32c_C2 00:07:41.102 ************************************ 00:07:41.102 14:44:40 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:41.102 14:44:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:41.102 14:44:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.102 14:44:40 -- common/autotest_common.sh@10 -- # set +x 00:07:41.102 ************************************ 00:07:41.102 START TEST accel_copy 00:07:41.102 ************************************ 00:07:41.102 14:44:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:41.102 
14:44:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.102 14:44:40 -- accel/accel.sh@17 -- # local accel_module 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # IFS=: 00:07:41.102 14:44:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:41.102 14:44:40 -- accel/accel.sh@19 -- # read -r var val 00:07:41.102 14:44:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:41.102 14:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.102 14:44:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.102 14:44:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.102 14:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.102 14:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.102 14:44:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.102 14:44:40 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.102 14:44:40 -- accel/accel.sh@41 -- # jq -r . 00:07:41.102 [2024-04-26 14:44:40.737154] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:41.102 [2024-04-26 14:44:40.737300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122869 ] 00:07:41.102 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.102 [2024-04-26 14:44:40.861891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.102 [2024-04-26 14:44:41.110759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=0x1 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=copy 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- 
accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=software 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=32 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=32 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=1 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.362 14:44:41 -- accel/accel.sh@20 -- # val=Yes 00:07:41.362 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.362 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.362 14:44:41 -- accel/accel.sh@19 
-- # read -r var val 00:07:41.363 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.363 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.363 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.363 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:41.363 14:44:41 -- accel/accel.sh@20 -- # val= 00:07:41.363 14:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.363 14:44:41 -- accel/accel.sh@19 -- # IFS=: 00:07:41.363 14:44:41 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@20 -- # val= 00:07:43.267 14:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.267 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.267 14:44:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.267 14:44:43 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:07:43.267 14:44:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.267 00:07:43.267 real 0m2.616s 00:07:43.267 user 0m2.378s 00:07:43.267 sys 0m0.235s 00:07:43.267 14:44:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.267 14:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:43.267 ************************************ 00:07:43.267 END TEST accel_copy 00:07:43.267 ************************************ 00:07:43.267 14:44:43 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.267 14:44:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.267 14:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.267 14:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:43.526 ************************************ 00:07:43.526 START TEST accel_fill 00:07:43.526 ************************************ 00:07:43.526 14:44:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.526 14:44:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.526 14:44:43 -- accel/accel.sh@17 -- # local accel_module 00:07:43.526 14:44:43 -- accel/accel.sh@19 -- # IFS=: 00:07:43.526 14:44:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.526 14:44:43 -- accel/accel.sh@19 -- # read -r var val 00:07:43.526 14:44:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.526 14:44:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.526 14:44:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.526 14:44:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.526 14:44:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.526 14:44:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.526 14:44:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.526 14:44:43 -- accel/accel.sh@40 
-- # local IFS=, 00:07:43.526 14:44:43 -- accel/accel.sh@41 -- # jq -r . 00:07:43.526 [2024-04-26 14:44:43.472461] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:43.526 [2024-04-26 14:44:43.472585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123282 ] 00:07:43.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.526 [2024-04-26 14:44:43.599441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.785 [2024-04-26 14:44:43.848501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.044 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.044 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.044 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.044 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.044 14:44:44 -- accel/accel.sh@20 -- # val=0x1 00:07:44.044 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.044 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.044 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.044 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=fill 00:07:44.045 14:44:44 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=0x80 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=software 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@22 -- # accel_module=software 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=64 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=64 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=1 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 
-- # val='1 seconds' 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val=Yes 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:44.045 14:44:44 -- accel/accel.sh@20 -- # val= 00:07:44.045 14:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # IFS=: 00:07:44.045 14:44:44 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.577 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.577 14:44:46 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:46.577 14:44:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.577 00:07:46.577 real 0m2.616s 00:07:46.577 user 0m2.377s 00:07:46.577 sys 0m0.236s 00:07:46.577 14:44:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.577 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:46.577 ************************************ 00:07:46.577 END TEST accel_fill 00:07:46.577 ************************************ 00:07:46.577 14:44:46 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:46.577 14:44:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:46.577 14:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.577 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:46.577 ************************************ 00:07:46.577 START TEST accel_copy_crc32c 00:07:46.577 ************************************ 00:07:46.577 14:44:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:46.577 14:44:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.577 14:44:46 -- accel/accel.sh@17 -- # local accel_module 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.577 14:44:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:46.577 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.577 14:44:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:46.577 14:44:46 -- 
accel/accel.sh@12 -- # build_accel_config 00:07:46.577 14:44:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.577 14:44:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.577 14:44:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.577 14:44:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.577 14:44:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.577 14:44:46 -- accel/accel.sh@40 -- # local IFS=, 00:07:46.577 14:44:46 -- accel/accel.sh@41 -- # jq -r . 00:07:46.577 [2024-04-26 14:44:46.217808] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:46.577 [2024-04-26 14:44:46.217921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123582 ] 00:07:46.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.577 [2024-04-26 14:44:46.348906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.577 [2024-04-26 14:44:46.598730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=0x1 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=0 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=software 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=32 00:07:46.835 14:44:46 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=32 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=1 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val=Yes 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:46.835 14:44:46 -- accel/accel.sh@20 -- # val= 00:07:46.835 14:44:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # IFS=: 00:07:46.835 14:44:46 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 
14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@20 -- # val= 00:07:48.734 14:44:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.734 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.734 14:44:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.734 14:44:48 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:48.734 14:44:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.734 00:07:48.734 real 0m2.631s 00:07:48.734 user 0m2.371s 00:07:48.734 sys 0m0.258s 00:07:48.734 14:44:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.734 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:07:48.734 ************************************ 00:07:48.734 END TEST accel_copy_crc32c 00:07:48.734 ************************************ 00:07:48.992 14:44:48 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:48.992 14:44:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:48.992 14:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.992 14:44:48 -- common/autotest_common.sh@10 -- # set +x 00:07:48.992 ************************************ 00:07:48.992 START TEST 
accel_copy_crc32c_C2 00:07:48.992 ************************************ 00:07:48.992 14:44:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:48.992 14:44:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.992 14:44:48 -- accel/accel.sh@17 -- # local accel_module 00:07:48.992 14:44:48 -- accel/accel.sh@19 -- # IFS=: 00:07:48.992 14:44:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:48.992 14:44:48 -- accel/accel.sh@19 -- # read -r var val 00:07:48.992 14:44:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:48.992 14:44:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.992 14:44:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.992 14:44:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.992 14:44:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.992 14:44:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.992 14:44:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.992 14:44:48 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.992 14:44:48 -- accel/accel.sh@41 -- # jq -r . 00:07:48.992 [2024-04-26 14:44:48.985958] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:48.992 [2024-04-26 14:44:48.986074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123920 ] 00:07:48.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.250 [2024-04-26 14:44:49.115430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.509 [2024-04-26 14:44:49.353316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=0x1 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=0 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=software 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=32 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=32 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=1 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 
-- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val=Yes 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:49.509 14:44:49 -- accel/accel.sh@20 -- # val= 00:07:49.509 14:44:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # IFS=: 00:07:49.509 14:44:49 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@20 -- # val= 00:07:52.040 14:44:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.040 14:44:51 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:52.040 14:44:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.040 00:07:52.040 real 0m2.614s 00:07:52.040 user 0m2.362s 00:07:52.040 sys 0m0.248s 00:07:52.040 14:44:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:52.040 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.040 ************************************ 00:07:52.040 END TEST accel_copy_crc32c_C2 00:07:52.040 ************************************ 00:07:52.040 14:44:51 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:52.040 14:44:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:52.040 14:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.040 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.040 ************************************ 00:07:52.040 START TEST accel_dualcast 00:07:52.040 ************************************ 00:07:52.040 14:44:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:52.040 14:44:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.040 14:44:51 -- accel/accel.sh@17 -- # local accel_module 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # IFS=: 00:07:52.040 14:44:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:52.040 14:44:51 -- accel/accel.sh@19 -- # read -r var val 00:07:52.040 14:44:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w dualcast -y 00:07:52.040 14:44:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.040 14:44:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.040 14:44:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.040 14:44:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.040 14:44:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.040 14:44:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.040 14:44:51 -- accel/accel.sh@40 -- # local IFS=, 00:07:52.040 14:44:51 -- accel/accel.sh@41 -- # jq -r . 00:07:52.040 [2024-04-26 14:44:51.717716] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:52.040 [2024-04-26 14:44:51.717830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124292 ] 00:07:52.040 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.040 [2024-04-26 14:44:51.847871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.040 [2024-04-26 14:44:52.096428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=0x1 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # 
case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=dualcast 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=software 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=32 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=32 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=1 
00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val=Yes 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:52.299 14:44:52 -- accel/accel.sh@20 -- # val= 00:07:52.299 14:44:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # IFS=: 00:07:52.299 14:44:52 -- accel/accel.sh@19 -- # read -r var val 00:07:54.203 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.203 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.203 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.203 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.203 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.203 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.204 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.204 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.204 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.204 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # 
IFS=: 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.204 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.204 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.204 14:44:54 -- accel/accel.sh@20 -- # val= 00:07:54.204 14:44:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.204 14:44:54 -- accel/accel.sh@19 -- # read -r var val 00:07:54.463 14:44:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.463 14:44:54 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:54.463 14:44:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.463 00:07:54.463 real 0m2.609s 00:07:54.463 user 0m2.369s 00:07:54.463 sys 0m0.237s 00:07:54.463 14:44:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.463 14:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.463 ************************************ 00:07:54.463 END TEST accel_dualcast 00:07:54.463 ************************************ 00:07:54.463 14:44:54 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:54.463 14:44:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:54.463 14:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.463 14:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.463 ************************************ 00:07:54.463 START TEST accel_compare 00:07:54.463 ************************************ 00:07:54.463 14:44:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:54.463 14:44:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.463 14:44:54 -- accel/accel.sh@17 -- # local accel_module 00:07:54.463 14:44:54 -- accel/accel.sh@19 -- # IFS=: 00:07:54.463 14:44:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:54.463 14:44:54 -- accel/accel.sh@19 -- # read -r var 
val 00:07:54.463 14:44:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:54.463 14:44:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.463 14:44:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.463 14:44:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.463 14:44:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.463 14:44:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.463 14:44:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.463 14:44:54 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.463 14:44:54 -- accel/accel.sh@41 -- # jq -r . 00:07:54.463 [2024-04-26 14:44:54.440101] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:54.463 [2024-04-26 14:44:54.440259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124596 ] 00:07:54.463 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.722 [2024-04-26 14:44:54.568289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.984 [2024-04-26 14:44:54.818397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val= 00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=: 00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val 00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val= 00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=: 00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val 00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=0x1 00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=: 00:07:54.984 14:44:55 -- 
accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=compare
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@23 -- # accel_opc=compare
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=software
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@22 -- # accel_module=software
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=32
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=32
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=1
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=Yes
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:54.984 14:44:55 -- accel/accel.sh@20 -- # val=
00:07:54.984 14:44:55 -- accel/accel.sh@21 -- # case "$var" in
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # IFS=:
00:07:54.984 14:44:55 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:56 -- accel/accel.sh@20 -- # val=
00:07:57.516 14:44:56 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:56 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:57 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:57.516 14:44:57 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:57.516 14:44:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:57.516
00:07:57.516 real 0m2.602s
00:07:57.516 user 0m2.374s
00:07:57.516 sys 0m0.223s
00:07:57.516 14:44:57 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:57.516 14:44:57 -- common/autotest_common.sh@10 -- # set +x
00:07:57.516 ************************************
00:07:57.516 END TEST accel_compare
00:07:57.516 ************************************
00:07:57.516 14:44:57 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:57.516 14:44:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:57.516 14:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:57.516 14:44:57 -- common/autotest_common.sh@10 -- # set +x
00:07:57.516 ************************************
00:07:57.516 START TEST accel_xor
00:07:57.516 ************************************
00:07:57.516 14:44:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y
00:07:57.516 14:44:57 -- accel/accel.sh@16 -- # local accel_opc
00:07:57.516 14:44:57 -- accel/accel.sh@17 -- # local accel_module
00:07:57.516 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.516 14:44:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:57.516 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.516 14:44:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:57.516 14:44:57 -- accel/accel.sh@12 -- # build_accel_config
00:07:57.516 14:44:57 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:57.516 14:44:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:57.516 14:44:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:57.516 14:44:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:57.516 14:44:57 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:57.516 14:44:57 -- accel/accel.sh@40 -- # local IFS=,
00:07:57.516 14:44:57 -- accel/accel.sh@41 -- # jq -r .
[2024-04-26 14:44:57.155815] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
[2024-04-26 14:44:57.155936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125002 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-26 14:44:57.283780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-26 14:44:57.533293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.774 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.774 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.774 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.774 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.774 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=0x1
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=xor
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@23 -- # accel_opc=xor
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=2
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=software
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@22 -- # accel_module=software
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=32
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=32
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=1
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=Yes
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:57.775 14:44:57 -- accel/accel.sh@20 -- # val=
00:07:57.775 14:44:57 -- accel/accel.sh@21 -- # case "$var" in
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # IFS=:
00:07:57.775 14:44:57 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@20 -- # val=
00:07:59.677 14:44:59 -- accel/accel.sh@21 -- # case "$var" in
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.677 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.677 14:44:59 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:59.677 14:44:59 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:59.677 14:44:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:59.677
00:07:59.677 real 0m2.597s
00:07:59.677 user 0m2.355s
00:07:59.677 sys 0m0.238s
00:07:59.677 14:44:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:59.677 14:44:59 -- common/autotest_common.sh@10 -- # set +x
00:07:59.677 ************************************
00:07:59.677 END TEST accel_xor
00:07:59.677 ************************************
00:07:59.677 14:44:59 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:59.677 14:44:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:59.677 14:44:59 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:59.677 14:44:59 -- common/autotest_common.sh@10 -- # set +x
00:07:59.936 ************************************
00:07:59.936 START TEST accel_xor
00:07:59.936 ************************************
00:07:59.936 14:44:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3
00:07:59.936 14:44:59 -- accel/accel.sh@16 -- # local accel_opc
00:07:59.936 14:44:59 -- accel/accel.sh@17 -- # local accel_module
00:07:59.936 14:44:59 -- accel/accel.sh@19 -- # IFS=:
00:07:59.936 14:44:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:59.936 14:44:59 -- accel/accel.sh@19 -- # read -r var val
00:07:59.936 14:44:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:59.936 14:44:59 -- accel/accel.sh@12 -- # build_accel_config
00:07:59.936 14:44:59 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:59.936 14:44:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:59.936 14:44:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:59.936 14:44:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:59.936 14:44:59 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:59.936 14:44:59 -- accel/accel.sh@40 -- # local IFS=,
00:07:59.936 14:44:59 -- accel/accel.sh@41 -- # jq -r .
[2024-04-26 14:44:59.868924] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:07:59.936 [2024-04-26 14:44:59.869033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125299 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-26 14:44:59.997769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-26 14:45:00.248934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=0x1
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=xor
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@23 -- # accel_opc=xor
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=3
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=software
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@22 -- # accel_module=software
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=32
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=32
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=1
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=Yes
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:00.453 14:45:00 -- accel/accel.sh@20 -- # val=
00:08:00.453 14:45:00 -- accel/accel.sh@21 -- # case "$var" in
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # IFS=:
00:08:00.453 14:45:00 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@20 -- # val=
00:08:02.979 14:45:02 -- accel/accel.sh@21 -- # case "$var" in
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.979 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.979 14:45:02 -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:02.979 14:45:02 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:02.979 14:45:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:02.979
00:08:02.979 real 0m2.625s
00:08:02.979 user 0m2.398s
00:08:02.979 sys 0m0.221s
00:08:02.979 14:45:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:02.979 14:45:02 -- common/autotest_common.sh@10 -- # set +x
00:08:02.979 ************************************
00:08:02.979 END TEST accel_xor
00:08:02.979 ************************************
00:08:02.980 14:45:02 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:02.980 14:45:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:02.980 14:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:02.980 14:45:02 -- common/autotest_common.sh@10 -- # set +x
00:08:02.980 ************************************
00:08:02.980 START TEST accel_dif_verify
00:08:02.980 ************************************
00:08:02.980 14:45:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify
00:08:02.980 14:45:02 -- accel/accel.sh@16 -- # local accel_opc
00:08:02.980 14:45:02 -- accel/accel.sh@17 -- # local accel_module
00:08:02.980 14:45:02 -- accel/accel.sh@19 -- # IFS=:
00:08:02.980 14:45:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:02.980 14:45:02 -- accel/accel.sh@19 -- # read -r var val
00:08:02.980 14:45:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:02.980 14:45:02 -- accel/accel.sh@12 -- # build_accel_config
00:08:02.980 14:45:02 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:02.980 14:45:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:02.980 14:45:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:02.980 14:45:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:02.980 14:45:02 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:02.980 14:45:02 -- accel/accel.sh@40 -- # local IFS=,
00:08:02.980 14:45:02 -- accel/accel.sh@41 -- # jq -r .
[2024-04-26 14:45:02.607953] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
[2024-04-26 14:45:02.608079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125777 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-26 14:45:02.736182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-26 14:45:02.986419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=0x1
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=dif_verify
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val='512 bytes'
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val='8 bytes'
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=software
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@22 -- # accel_module=software
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=32
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=32
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=1
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=No
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:03.238 14:45:03 -- accel/accel.sh@20 -- # val=
00:08:03.238 14:45:03 -- accel/accel.sh@21 -- # case "$var" in
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # IFS=:
00:08:03.238 14:45:03 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.164 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.164 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.164 14:45:05 -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:05.164 14:45:05 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:08:05.164 14:45:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:05.164
00:08:05.164 real 0m2.645s
00:08:05.164 user 0m2.398s
00:08:05.164 sys 0m0.243s
00:08:05.164 14:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:05.164 14:45:05 -- common/autotest_common.sh@10 -- # set +x
00:08:05.164 ************************************
00:08:05.164 END TEST accel_dif_verify
00:08:05.164 ************************************
00:08:05.164 14:45:05 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:05.164 14:45:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:08:05.164 14:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:05.164 14:45:05 -- common/autotest_common.sh@10 -- # set +x
00:08:05.423 ************************************
00:08:05.423 START TEST accel_dif_generate
00:08:05.423 ************************************
00:08:05.423 14:45:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate
00:08:05.423 14:45:05 -- accel/accel.sh@16 -- # local accel_opc
00:08:05.423 14:45:05 -- accel/accel.sh@17 -- # local accel_module
00:08:05.423 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.423 14:45:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:05.423 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.423 14:45:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:05.423 14:45:05 -- accel/accel.sh@12 -- # build_accel_config
00:08:05.423 14:45:05 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:05.423 14:45:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:05.423 14:45:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:05.423 14:45:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:05.423 14:45:05 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:05.423 14:45:05 -- accel/accel.sh@40 -- # local IFS=,
00:08:05.423 14:45:05 -- accel/accel.sh@41 -- # jq -r .
[2024-04-26 14:45:05.386476] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:08:05.423 [2024-04-26 14:45:05.386616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126127 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-26 14:45:05.519723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-26 14:45:05.773746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.938 14:45:05 -- accel/accel.sh@20 -- # val=
00:08:05.938 14:45:05 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.938 14:45:05 -- accel/accel.sh@19 -- # IFS=:
00:08:05.938 14:45:05 -- accel/accel.sh@19 -- # read -r var val
00:08:05.938 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.938 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.938 14:45:06 -- accel/accel.sh@20 -- # val=0x1
00:08:05.938 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.938 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.938 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.938 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.938 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.938 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.938 14:45:06 -- accel/accel.sh@20 -- # val=dif_generate
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val='512 bytes'
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val='8 bytes'
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=software
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@22 -- # accel_module=software
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=32
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=32
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=1
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=No
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:05.939 14:45:06 -- accel/accel.sh@20 -- # val=
00:08:05.939 14:45:06 -- accel/accel.sh@21 -- # case "$var" in
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # IFS=:
00:08:05.939 14:45:06 -- accel/accel.sh@19 -- # read -r var val
00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val=
00:08:08.470 14:45:07 -- accel/accel.sh@21 -- # case "$var" in
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=:
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val
00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val=
00:08:08.470 14:45:07 -- accel/accel.sh@21 -- # case "$var" in
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=:
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val
00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val=
00:08:08.470 14:45:07 -- accel/accel.sh@21 -- # case "$var" in
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=:
00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val
00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val=
00:08:08.470 14:45:07
-- accel/accel.sh@21 -- # case "$var" in 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=: 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val 00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val= 00:08:08.470 14:45:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=: 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val 00:08:08.470 14:45:07 -- accel/accel.sh@20 -- # val= 00:08:08.470 14:45:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # IFS=: 00:08:08.470 14:45:07 -- accel/accel.sh@19 -- # read -r var val 00:08:08.470 14:45:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.470 14:45:07 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:08.470 14:45:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.470 00:08:08.470 real 0m2.614s 00:08:08.470 user 0m2.383s 00:08:08.470 sys 0m0.227s 00:08:08.470 14:45:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.470 14:45:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.470 ************************************ 00:08:08.470 END TEST accel_dif_generate 00:08:08.470 ************************************ 00:08:08.470 14:45:07 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:08.470 14:45:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:08.470 14:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.470 14:45:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.470 ************************************ 00:08:08.470 START TEST accel_dif_generate_copy 00:08:08.470 ************************************ 00:08:08.470 14:45:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:08.470 14:45:08 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.470 14:45:08 -- accel/accel.sh@17 -- # local accel_module 00:08:08.470 14:45:08 -- accel/accel.sh@19 -- # IFS=: 
00:08:08.470 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.470 14:45:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:08.470 14:45:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:08.470 14:45:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.470 14:45:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.470 14:45:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.470 14:45:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.470 14:45:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.470 14:45:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.470 14:45:08 -- accel/accel.sh@40 -- # local IFS=, 00:08:08.470 14:45:08 -- accel/accel.sh@41 -- # jq -r . 00:08:08.470 [2024-04-26 14:45:08.113292] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:08.470 [2024-04-26 14:45:08.113425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126741 ] 00:08:08.470 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.470 [2024-04-26 14:45:08.240171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.470 [2024-04-26 14:45:08.495077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=0x1 
00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=software 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@22 -- # accel_module=software 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # 
read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=32 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=32 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=1 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.729 14:45:08 -- accel/accel.sh@20 -- # val=No 00:08:08.729 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.729 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.730 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.730 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.730 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.730 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:08.730 14:45:08 -- accel/accel.sh@20 -- # val= 00:08:08.730 14:45:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.730 14:45:08 -- accel/accel.sh@19 -- # IFS=: 00:08:08.730 14:45:08 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.632 14:45:10 -- accel/accel.sh@20 -- # val= 00:08:10.632 14:45:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.632 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.890 14:45:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.890 14:45:10 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:10.890 14:45:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.890 00:08:10.890 real 0m2.644s 00:08:10.890 user 0m2.393s 00:08:10.890 sys 0m0.245s 00:08:10.890 14:45:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.890 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:08:10.890 ************************************ 00:08:10.890 END TEST accel_dif_generate_copy 00:08:10.890 ************************************ 00:08:10.890 14:45:10 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:10.890 14:45:10 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.890 14:45:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 
00:08:10.890 14:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.890 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:08:10.890 ************************************ 00:08:10.890 START TEST accel_comp 00:08:10.890 ************************************ 00:08:10.890 14:45:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.890 14:45:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.890 14:45:10 -- accel/accel.sh@17 -- # local accel_module 00:08:10.890 14:45:10 -- accel/accel.sh@19 -- # IFS=: 00:08:10.890 14:45:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.890 14:45:10 -- accel/accel.sh@19 -- # read -r var val 00:08:10.890 14:45:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.890 14:45:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.890 14:45:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.890 14:45:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.890 14:45:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.890 14:45:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.890 14:45:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.890 14:45:10 -- accel/accel.sh@40 -- # local IFS=, 00:08:10.890 14:45:10 -- accel/accel.sh@41 -- # jq -r . 00:08:10.890 [2024-04-26 14:45:10.882563] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:10.890 [2024-04-26 14:45:10.882682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127348 ] 00:08:10.890 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.149 [2024-04-26 14:45:11.012177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.407 [2024-04-26 14:45:11.265260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.665 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=0x1 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 
-- # val=compress 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@23 -- # accel_opc=compress 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=software 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=32 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=32 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=1 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 
00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val=No 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:11.666 14:45:11 -- accel/accel.sh@20 -- # val= 00:08:11.666 14:45:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # IFS=: 00:08:11.666 14:45:11 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # 
val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@20 -- # val= 00:08:13.567 14:45:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.567 14:45:13 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:13.567 14:45:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.567 00:08:13.567 real 0m2.652s 00:08:13.567 user 0m2.414s 00:08:13.567 sys 0m0.236s 00:08:13.567 14:45:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.567 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:08:13.567 ************************************ 00:08:13.567 END TEST accel_comp 00:08:13.567 ************************************ 00:08:13.567 14:45:13 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:13.567 14:45:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:13.567 14:45:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.567 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:08:13.567 ************************************ 00:08:13.567 START TEST accel_decomp 00:08:13.567 ************************************ 00:08:13.567 14:45:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:13.567 14:45:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.567 14:45:13 -- accel/accel.sh@17 -- # local accel_module 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # IFS=: 00:08:13.567 14:45:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:13.567 14:45:13 -- accel/accel.sh@19 -- # read -r var val 00:08:13.567 14:45:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:13.567 14:45:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.567 14:45:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.567 14:45:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.567 14:45:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.567 14:45:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.567 14:45:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.567 14:45:13 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.567 14:45:13 -- accel/accel.sh@41 -- # jq -r . 00:08:13.825 [2024-04-26 14:45:13.650724] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:13.825 [2024-04-26 14:45:13.650850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127645 ] 00:08:13.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.825 [2024-04-26 14:45:13.779099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.083 [2024-04-26 14:45:14.034751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.340 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.340 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.340 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.340 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.340 
14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.340 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.340 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=0x1 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=decompress 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=software 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@22 -- # accel_module=software 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- 
accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=32 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=32 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=1 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val=Yes 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 00:08:14.341 14:45:14 -- accel/accel.sh@20 -- # val= 00:08:14.341 14:45:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # IFS=: 00:08:14.341 14:45:14 -- accel/accel.sh@19 -- # read -r var val 
00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@20 -- # val= 00:08:16.238 14:45:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.238 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.238 14:45:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.238 14:45:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.238 14:45:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.238 00:08:16.238 real 0m2.650s 00:08:16.238 user 0m2.418s 00:08:16.238 sys 0m0.228s 00:08:16.238 14:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.238 14:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:16.238 ************************************ 00:08:16.238 END TEST accel_decomp 00:08:16.238 ************************************ 
00:08:16.238 14:45:16 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:16.238 14:45:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:16.238 14:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.238 14:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:16.496 ************************************ 00:08:16.496 START TEST accel_decmop_full 00:08:16.496 ************************************ 00:08:16.496 14:45:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:16.496 14:45:16 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.496 14:45:16 -- accel/accel.sh@17 -- # local accel_module 00:08:16.496 14:45:16 -- accel/accel.sh@19 -- # IFS=: 00:08:16.496 14:45:16 -- accel/accel.sh@19 -- # read -r var val 00:08:16.496 14:45:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:16.496 14:45:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:16.496 14:45:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.496 14:45:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.496 14:45:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.497 14:45:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.497 14:45:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.497 14:45:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.497 14:45:16 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.497 14:45:16 -- accel/accel.sh@41 -- # jq -r . 00:08:16.497 [2024-04-26 14:45:16.415502] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:16.497 [2024-04-26 14:45:16.415628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128063 ] 00:08:16.497 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.497 [2024-04-26 14:45:16.541873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.755 [2024-04-26 14:45:16.795932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=0x1 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 
-- # val=decompress 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=software 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@22 -- # accel_module=software 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=32 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=32 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=1 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # 
IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val=Yes 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:17.013 14:45:17 -- accel/accel.sh@20 -- # val= 00:08:17.013 14:45:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # IFS=: 00:08:17.013 14:45:17 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 
-- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.541 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.541 14:45:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.541 14:45:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.541 00:08:19.541 real 0m2.666s 00:08:19.541 user 0m2.423s 00:08:19.541 sys 0m0.240s 00:08:19.541 14:45:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:19.541 14:45:19 -- common/autotest_common.sh@10 -- # set +x 00:08:19.541 ************************************ 00:08:19.541 END TEST accel_decmop_full 00:08:19.541 ************************************ 00:08:19.541 14:45:19 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.541 14:45:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:19.541 14:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.541 14:45:19 -- common/autotest_common.sh@10 -- # set +x 00:08:19.541 ************************************ 00:08:19.541 START TEST accel_decomp_mcore 00:08:19.541 ************************************ 00:08:19.541 14:45:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.541 14:45:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.541 14:45:19 -- accel/accel.sh@17 -- # local accel_module 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.541 14:45:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.541 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.541 14:45:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:19.541 14:45:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.541 14:45:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.541 14:45:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.541 14:45:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.541 14:45:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.541 14:45:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.541 14:45:19 -- accel/accel.sh@40 -- # local IFS=, 00:08:19.541 14:45:19 -- accel/accel.sh@41 -- # jq -r . 00:08:19.541 [2024-04-26 14:45:19.208945] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:19.541 [2024-04-26 14:45:19.209058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128360 ] 00:08:19.541 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.541 [2024-04-26 14:45:19.338012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.541 [2024-04-26 14:45:19.597376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.541 [2024-04-26 14:45:19.597433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.541 [2024-04-26 14:45:19.597481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.541 [2024-04-26 14:45:19.597484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- 
accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=0xf 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=decompress 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 
00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=software 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@22 -- # accel_module=software 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=32 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=32 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=1 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.799 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.799 14:45:19 -- accel/accel.sh@20 -- # val=Yes 00:08:19.799 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.800 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.800 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.800 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.800 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.800 14:45:19 -- 
accel/accel.sh@19 -- # IFS=: 00:08:19.800 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:19.800 14:45:19 -- accel/accel.sh@20 -- # val= 00:08:19.800 14:45:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.800 14:45:19 -- accel/accel.sh@19 -- # IFS=: 00:08:19.800 14:45:19 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- 
accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@20 -- # val= 00:08:22.334 14:45:21 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.334 14:45:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:22.334 14:45:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.334 00:08:22.334 real 0m2.674s 00:08:22.334 user 0m7.829s 00:08:22.334 sys 0m0.257s 00:08:22.334 14:45:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:22.334 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:08:22.334 ************************************ 00:08:22.334 END TEST accel_decomp_mcore 00:08:22.334 ************************************ 00:08:22.334 14:45:21 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.334 14:45:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:22.334 14:45:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.334 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:08:22.334 ************************************ 00:08:22.334 START TEST accel_decomp_full_mcore 00:08:22.334 ************************************ 00:08:22.334 14:45:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.334 14:45:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.334 14:45:21 -- accel/accel.sh@17 -- # local accel_module 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # IFS=: 00:08:22.334 14:45:21 -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.334 14:45:21 -- accel/accel.sh@19 -- # read -r var val 00:08:22.334 14:45:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:22.334 14:45:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.334 14:45:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.334 14:45:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.334 14:45:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.334 14:45:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.334 14:45:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.334 14:45:21 -- accel/accel.sh@40 -- # local IFS=, 00:08:22.334 14:45:21 -- accel/accel.sh@41 -- # jq -r . 00:08:22.334 [2024-04-26 14:45:22.003928] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:22.334 [2024-04-26 14:45:22.004069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128773 ] 00:08:22.334 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.334 [2024-04-26 14:45:22.147750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.334 [2024-04-26 14:45:22.406165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.334 [2024-04-26 14:45:22.406201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.334 [2024-04-26 14:45:22.406256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.334 [2024-04-26 14:45:22.406258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=0xf 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 
-- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=decompress 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=software 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@22 -- # accel_module=software 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=32 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=32 00:08:22.599 14:45:22 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=1 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val=Yes 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:22.599 14:45:22 -- accel/accel.sh@20 -- # val= 00:08:22.599 14:45:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # IFS=: 00:08:22.599 14:45:22 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 
14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@20 -- # val= 00:08:25.126 14:45:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.126 14:45:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:25.126 14:45:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.126 00:08:25.126 real 0m2.723s 00:08:25.126 user 0m0.013s 00:08:25.126 sys 0m0.002s 00:08:25.126 14:45:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.126 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.126 ************************************ 00:08:25.126 END TEST 
accel_decomp_full_mcore 00:08:25.126 ************************************ 00:08:25.126 14:45:24 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.126 14:45:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:25.126 14:45:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.126 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:25.126 ************************************ 00:08:25.126 START TEST accel_decomp_mthread 00:08:25.126 ************************************ 00:08:25.126 14:45:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.126 14:45:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.126 14:45:24 -- accel/accel.sh@17 -- # local accel_module 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # IFS=: 00:08:25.126 14:45:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.126 14:45:24 -- accel/accel.sh@19 -- # read -r var val 00:08:25.126 14:45:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:25.126 14:45:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.126 14:45:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.126 14:45:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.126 14:45:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.126 14:45:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.126 14:45:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.126 14:45:24 -- accel/accel.sh@40 -- # local IFS=, 00:08:25.126 14:45:24 -- accel/accel.sh@41 -- # jq -r . 
00:08:25.126 [2024-04-26 14:45:24.839424] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:25.126 [2024-04-26 14:45:24.839575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129083 ] 00:08:25.126 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.126 [2024-04-26 14:45:24.962413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.126 [2024-04-26 14:45:25.206416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.384 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.384 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.384 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.384 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=0x1 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 
-- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=decompress 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=software 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@22 -- # accel_module=software 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=32 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=32 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=2 
00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val=Yes 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:25.385 14:45:25 -- accel/accel.sh@20 -- # val= 00:08:25.385 14:45:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # IFS=: 00:08:25.385 14:45:25 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # 
IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@20 -- # val= 00:08:27.915 14:45:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.915 14:45:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:27.915 14:45:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.915 00:08:27.915 real 0m2.635s 00:08:27.915 user 0m2.409s 00:08:27.915 sys 0m0.224s 00:08:27.915 14:45:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.915 14:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:27.915 ************************************ 00:08:27.915 END TEST accel_decomp_mthread 00:08:27.915 ************************************ 00:08:27.915 14:45:27 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:27.915 14:45:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:27.915 14:45:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.915 14:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:27.915 ************************************ 00:08:27.915 START TEST accel_deomp_full_mthread 00:08:27.915 ************************************ 00:08:27.915 14:45:27 -- common/autotest_common.sh@1111 -- # accel_test 
-t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:27.915 14:45:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:27.915 14:45:27 -- accel/accel.sh@17 -- # local accel_module 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # IFS=: 00:08:27.915 14:45:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:27.915 14:45:27 -- accel/accel.sh@19 -- # read -r var val 00:08:27.915 14:45:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:27.915 14:45:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:27.915 14:45:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.915 14:45:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.915 14:45:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.915 14:45:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.915 14:45:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.915 14:45:27 -- accel/accel.sh@40 -- # local IFS=, 00:08:27.915 14:45:27 -- accel/accel.sh@41 -- # jq -r . 00:08:27.915 [2024-04-26 14:45:27.596940] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:27.915 [2024-04-26 14:45:27.597050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129447 ] 00:08:27.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.915 [2024-04-26 14:45:27.724845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.915 [2024-04-26 14:45:27.976714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.173 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=0x1 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 
-- # val=decompress 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=software 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@22 -- # accel_module=software 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=32 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=32 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=2 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # 
IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val=Yes 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:28.174 14:45:28 -- accel/accel.sh@20 -- # val= 00:08:28.174 14:45:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # IFS=: 00:08:28.174 14:45:28 -- accel/accel.sh@19 -- # read -r var val 00:08:30.702 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.702 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.702 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.702 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.702 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.702 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.702 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.702 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.703 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.703 14:45:30 -- accel/accel.sh@20 
-- # val= 00:08:30.703 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.703 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.703 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.703 14:45:30 -- accel/accel.sh@20 -- # val= 00:08:30.703 14:45:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # IFS=: 00:08:30.703 14:45:30 -- accel/accel.sh@19 -- # read -r var val 00:08:30.703 14:45:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.703 14:45:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.703 14:45:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.703 00:08:30.703 real 0m2.697s 00:08:30.703 user 0m2.460s 00:08:30.703 sys 0m0.236s 00:08:30.703 14:45:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.703 14:45:30 -- common/autotest_common.sh@10 -- # set +x 00:08:30.703 ************************************ 00:08:30.703 END TEST accel_deomp_full_mthread 00:08:30.703 ************************************ 00:08:30.703 14:45:30 -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:30.703 14:45:30 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:30.703 14:45:30 -- accel/accel.sh@137 -- # build_accel_config 00:08:30.703 14:45:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.703 14:45:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.703 14:45:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.703 14:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.703 14:45:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.703 14:45:30 -- common/autotest_common.sh@10 -- # set +x 00:08:30.703 14:45:30 -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.703 14:45:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.703 14:45:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:30.703 14:45:30 -- accel/accel.sh@41 -- # jq -r . 00:08:30.703 ************************************ 00:08:30.703 START TEST accel_dif_functional_tests 00:08:30.703 ************************************ 00:08:30.703 14:45:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:30.703 [2024-04-26 14:45:30.457256] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:30.703 [2024-04-26 14:45:30.457405] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129793 ] 00:08:30.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.703 [2024-04-26 14:45:30.601812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.961 [2024-04-26 14:45:30.859518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.961 [2024-04-26 14:45:30.859569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.961 [2024-04-26 14:45:30.859576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.220 00:08:31.220 00:08:31.220 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.220 http://cunit.sourceforge.net/ 00:08:31.220 00:08:31.220 00:08:31.220 Suite: accel_dif 00:08:31.220 Test: verify: DIF generated, GUARD check ...passed 00:08:31.220 Test: verify: DIF generated, APPTAG check ...passed 00:08:31.220 Test: verify: DIF generated, REFTAG check ...passed 00:08:31.220 Test: verify: DIF not generated, GUARD check ...[2024-04-26 14:45:31.205921] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.220 [2024-04-26 14:45:31.205999] dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:31.220 passed 00:08:31.220 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 14:45:31.206072] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.220 [2024-04-26 14:45:31.206121] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:31.220 passed 00:08:31.220 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 14:45:31.206198] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.220 [2024-04-26 14:45:31.206246] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:31.220 passed 00:08:31.220 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:31.220 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 14:45:31.206370] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:31.220 passed 00:08:31.220 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:31.220 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:31.220 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:31.220 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 14:45:31.206635] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:31.220 passed 00:08:31.220 Test: generate copy: DIF generated, GUARD check ...passed 00:08:31.220 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:31.220 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:31.220 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:31.220 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:31.220 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:31.220 Test: generate copy: 
iovecs-len validate ...[2024-04-26 14:45:31.207109] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:31.220 passed 00:08:31.220 Test: generate copy: buffer alignment validate ...passed 00:08:31.220 00:08:31.220 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.220 suites 1 1 n/a 0 0 00:08:31.220 tests 20 20 20 0 0 00:08:31.220 asserts 204 204 204 0 n/a 00:08:31.220 00:08:31.220 Elapsed time = 0.005 seconds 00:08:32.596 00:08:32.596 real 0m2.121s 00:08:32.596 user 0m4.162s 00:08:32.596 sys 0m0.312s 00:08:32.596 14:45:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.596 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 ************************************ 00:08:32.596 END TEST accel_dif_functional_tests 00:08:32.596 ************************************ 00:08:32.596 00:08:32.596 real 1m5.337s 00:08:32.596 user 1m11.067s 00:08:32.596 sys 0m8.012s 00:08:32.596 14:45:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.596 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 ************************************ 00:08:32.596 END TEST accel 00:08:32.596 ************************************ 00:08:32.596 14:45:32 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:32.596 14:45:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.596 14:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.596 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:08:32.596 ************************************ 00:08:32.596 START TEST accel_rpc 00:08:32.596 ************************************ 00:08:32.596 14:45:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:32.855 * Looking for test storage... 
00:08:32.855 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:32.855 14:45:32 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:32.855 14:45:32 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=130134 00:08:32.855 14:45:32 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:32.855 14:45:32 -- accel/accel_rpc.sh@15 -- # waitforlisten 130134 00:08:32.855 14:45:32 -- common/autotest_common.sh@817 -- # '[' -z 130134 ']' 00:08:32.855 14:45:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.855 14:45:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:32.855 14:45:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.855 14:45:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:32.855 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:08:32.855 [2024-04-26 14:45:32.803062] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:32.855 [2024-04-26 14:45:32.803216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130134 ] 00:08:32.855 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.115 [2024-04-26 14:45:32.943923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.373 [2024-04-26 14:45:33.196499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.938 14:45:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:33.938 14:45:33 -- common/autotest_common.sh@850 -- # return 0 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:33.938 14:45:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.938 14:45:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.938 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 ************************************ 00:08:33.938 START TEST accel_assign_opcode 00:08:33.938 ************************************ 00:08:33.938 14:45:33 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:08:33.938 14:45:33 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:33.938 14:45:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.938 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:08:33.938 [2024-04-26 14:45:33.835036] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:33.938 14:45:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.938 14:45:33 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:33.938 14:45:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.938 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 [2024-04-26 14:45:33.843067] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:33.939 14:45:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.939 14:45:33 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:33.939 14:45:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.939 14:45:33 -- common/autotest_common.sh@10 -- # set +x 00:08:34.872 14:45:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.872 14:45:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:34.872 14:45:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.872 14:45:34 -- common/autotest_common.sh@10 -- # set +x 00:08:34.872 14:45:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:34.872 14:45:34 -- accel/accel_rpc.sh@42 -- # grep software 00:08:34.872 14:45:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.872 software 00:08:34.872 00:08:34.872 real 0m0.907s 00:08:34.872 user 0m0.039s 00:08:34.872 sys 0m0.008s 00:08:34.872 14:45:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.872 14:45:34 -- common/autotest_common.sh@10 -- # set +x 00:08:34.872 ************************************ 00:08:34.872 END TEST accel_assign_opcode 00:08:34.872 ************************************ 00:08:34.872 14:45:34 -- accel/accel_rpc.sh@55 -- # killprocess 130134 00:08:34.872 14:45:34 -- common/autotest_common.sh@936 -- # '[' -z 130134 ']' 00:08:34.872 14:45:34 -- common/autotest_common.sh@940 -- # kill -0 130134 00:08:34.872 14:45:34 -- common/autotest_common.sh@941 -- # uname 00:08:34.872 14:45:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.872 14:45:34 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 130134 00:08:34.872 14:45:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.872 14:45:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.872 14:45:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130134' 00:08:34.872 killing process with pid 130134 00:08:34.872 14:45:34 -- common/autotest_common.sh@955 -- # kill 130134 00:08:34.872 14:45:34 -- common/autotest_common.sh@960 -- # wait 130134 00:08:37.431 00:08:37.431 real 0m4.586s 00:08:37.431 user 0m4.595s 00:08:37.431 sys 0m0.665s 00:08:37.431 14:45:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.431 14:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 ************************************ 00:08:37.431 END TEST accel_rpc 00:08:37.431 ************************************ 00:08:37.431 14:45:37 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.431 14:45:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.431 14:45:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.431 14:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:37.431 ************************************ 00:08:37.431 START TEST app_cmdline 00:08:37.431 ************************************ 00:08:37.432 14:45:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.432 * Looking for test storage... 
00:08:37.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:37.432 14:45:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:37.432 14:45:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=130753 00:08:37.432 14:45:37 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:37.432 14:45:37 -- app/cmdline.sh@18 -- # waitforlisten 130753 00:08:37.432 14:45:37 -- common/autotest_common.sh@817 -- # '[' -z 130753 ']' 00:08:37.432 14:45:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.432 14:45:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:37.432 14:45:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.432 14:45:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:37.432 14:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:37.432 [2024-04-26 14:45:37.501411] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:08:37.432 [2024-04-26 14:45:37.501550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130753 ] 00:08:37.690 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.690 [2024-04-26 14:45:37.670194] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.948 [2024-04-26 14:45:37.955226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.881 14:45:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:38.881 14:45:38 -- common/autotest_common.sh@850 -- # return 0 00:08:38.881 14:45:38 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:39.138 { 00:08:39.138 "version": "SPDK v24.05-pre git sha1 8571999d8", 00:08:39.138 "fields": { 00:08:39.138 "major": 24, 00:08:39.138 "minor": 5, 00:08:39.138 "patch": 0, 00:08:39.138 "suffix": "-pre", 00:08:39.138 "commit": "8571999d8" 00:08:39.138 } 00:08:39.138 } 00:08:39.138 14:45:39 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:39.138 14:45:39 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:39.138 14:45:39 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:39.138 14:45:39 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:39.138 14:45:39 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:39.138 14:45:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.138 14:45:39 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:39.138 14:45:39 -- common/autotest_common.sh@10 -- # set +x 00:08:39.138 14:45:39 -- app/cmdline.sh@26 -- # sort 00:08:39.138 14:45:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.138 14:45:39 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:39.139 14:45:39 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:39.139 14:45:39 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.139 14:45:39 -- common/autotest_common.sh@638 -- # local es=0 00:08:39.139 14:45:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.139 14:45:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.139 14:45:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.139 14:45:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.139 14:45:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.139 14:45:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.139 14:45:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.139 14:45:39 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:39.139 14:45:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.139 14:45:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:39.396 request: 00:08:39.396 { 00:08:39.396 "method": "env_dpdk_get_mem_stats", 00:08:39.396 "req_id": 1 00:08:39.396 } 00:08:39.396 Got JSON-RPC error response 00:08:39.396 response: 00:08:39.396 { 00:08:39.396 "code": -32601, 00:08:39.396 "message": "Method not found" 00:08:39.396 } 00:08:39.396 14:45:39 -- common/autotest_common.sh@641 -- # es=1 00:08:39.396 14:45:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:39.396 14:45:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 
00:08:39.396 14:45:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:39.396 14:45:39 -- app/cmdline.sh@1 -- # killprocess 130753 00:08:39.396 14:45:39 -- common/autotest_common.sh@936 -- # '[' -z 130753 ']' 00:08:39.396 14:45:39 -- common/autotest_common.sh@940 -- # kill -0 130753 00:08:39.396 14:45:39 -- common/autotest_common.sh@941 -- # uname 00:08:39.396 14:45:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.396 14:45:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130753 00:08:39.396 14:45:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.396 14:45:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.396 14:45:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130753' 00:08:39.396 killing process with pid 130753 00:08:39.396 14:45:39 -- common/autotest_common.sh@955 -- # kill 130753 00:08:39.396 14:45:39 -- common/autotest_common.sh@960 -- # wait 130753 00:08:41.926 00:08:41.926 real 0m4.622s 00:08:41.926 user 0m5.065s 00:08:41.926 sys 0m0.746s 00:08:41.926 14:45:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.926 14:45:41 -- common/autotest_common.sh@10 -- # set +x 00:08:41.926 ************************************ 00:08:41.926 END TEST app_cmdline 00:08:41.926 ************************************ 00:08:42.185 14:45:42 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:42.185 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.185 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.185 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.185 ************************************ 00:08:42.185 START TEST version 00:08:42.185 ************************************ 00:08:42.185 14:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:42.185 * Looking for 
test storage... 00:08:42.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:42.185 14:45:42 -- app/version.sh@17 -- # get_header_version major 00:08:42.185 14:45:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:42.185 14:45:42 -- app/version.sh@14 -- # cut -f2 00:08:42.185 14:45:42 -- app/version.sh@14 -- # tr -d '"' 00:08:42.185 14:45:42 -- app/version.sh@17 -- # major=24 00:08:42.185 14:45:42 -- app/version.sh@18 -- # get_header_version minor 00:08:42.185 14:45:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:42.185 14:45:42 -- app/version.sh@14 -- # cut -f2 00:08:42.185 14:45:42 -- app/version.sh@14 -- # tr -d '"' 00:08:42.185 14:45:42 -- app/version.sh@18 -- # minor=5 00:08:42.185 14:45:42 -- app/version.sh@19 -- # get_header_version patch 00:08:42.185 14:45:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:42.185 14:45:42 -- app/version.sh@14 -- # cut -f2 00:08:42.185 14:45:42 -- app/version.sh@14 -- # tr -d '"' 00:08:42.185 14:45:42 -- app/version.sh@19 -- # patch=0 00:08:42.185 14:45:42 -- app/version.sh@20 -- # get_header_version suffix 00:08:42.185 14:45:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:42.185 14:45:42 -- app/version.sh@14 -- # cut -f2 00:08:42.185 14:45:42 -- app/version.sh@14 -- # tr -d '"' 00:08:42.185 14:45:42 -- app/version.sh@20 -- # suffix=-pre 00:08:42.185 14:45:42 -- app/version.sh@22 -- # version=24.5 00:08:42.185 14:45:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.185 14:45:42 -- app/version.sh@28 -- # version=24.5rc0 00:08:42.185 14:45:42 -- app/version.sh@30 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:42.185 14:45:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:42.185 14:45:42 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:42.185 14:45:42 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:42.185 00:08:42.185 real 0m0.114s 00:08:42.185 user 0m0.061s 00:08:42.185 sys 0m0.075s 00:08:42.185 14:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:42.185 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.185 ************************************ 00:08:42.185 END TEST version 00:08:42.185 ************************************ 00:08:42.185 14:45:42 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:42.185 14:45:42 -- spdk/autotest.sh@194 -- # uname -s 00:08:42.185 14:45:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:42.185 14:45:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:42.185 14:45:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:42.185 14:45:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:42.185 14:45:42 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:42.185 14:45:42 -- spdk/autotest.sh@258 -- # timing_exit lib 00:08:42.185 14:45:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:42.185 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.443 14:45:42 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:42.443 14:45:42 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:42.443 14:45:42 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:08:42.443 14:45:42 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:08:42.443 14:45:42 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:08:42.443 14:45:42 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:42.443 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.443 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.443 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.443 ************************************ 00:08:42.443 START TEST nvmf_rdma 00:08:42.443 ************************************ 00:08:42.444 14:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:42.444 * Looking for test storage... 00:08:42.444 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.444 14:45:42 -- nvmf/common.sh@7 -- # uname -s 00:08:42.444 14:45:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.444 14:45:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.444 14:45:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.444 14:45:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.444 14:45:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.444 14:45:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.444 14:45:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.444 14:45:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.444 14:45:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.444 14:45:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.444 14:45:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.444 14:45:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.444 14:45:42 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.444 14:45:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.444 14:45:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.444 14:45:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.444 14:45:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.444 14:45:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.444 14:45:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.444 14:45:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.444 14:45:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.444 14:45:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.444 14:45:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:42.444 14:45:42 -- paths/export.sh@5 -- # export PATH 00:08:42.444 14:45:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.444 14:45:42 -- nvmf/common.sh@47 -- # : 0 00:08:42.444 14:45:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.444 14:45:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.444 14:45:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.444 14:45:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.444 14:45:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.444 14:45:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.444 14:45:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.444 14:45:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:42.444 14:45:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.444 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:42.444 14:45:42 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:42.444 14:45:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.444 14:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.444 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.703 ************************************ 00:08:42.703 START TEST 
nvmf_example 00:08:42.703 ************************************ 00:08:42.703 14:45:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:42.703 * Looking for test storage... 00:08:42.703 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:42.703 14:45:42 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.703 14:45:42 -- nvmf/common.sh@7 -- # uname -s 00:08:42.703 14:45:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.703 14:45:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.703 14:45:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.703 14:45:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.703 14:45:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.703 14:45:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.703 14:45:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.703 14:45:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.703 14:45:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.703 14:45:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.703 14:45:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.703 14:45:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:42.703 14:45:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.703 14:45:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.703 14:45:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.703 14:45:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.703 14:45:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.703 14:45:42 -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:42.703 14:45:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.703 14:45:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.703 14:45:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.703 14:45:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.703 14:45:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.703 14:45:42 -- paths/export.sh@5 -- # export PATH 00:08:42.704 
14:45:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.704 14:45:42 -- nvmf/common.sh@47 -- # : 0 00:08:42.704 14:45:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.704 14:45:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.704 14:45:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.704 14:45:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.704 14:45:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.704 14:45:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.704 14:45:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.704 14:45:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.704 14:45:42 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:42.704 14:45:42 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:42.704 14:45:42 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:42.704 14:45:42 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:42.704 14:45:42 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:42.704 14:45:42 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:42.704 14:45:42 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:42.704 14:45:42 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:42.704 14:45:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.704 14:45:42 -- common/autotest_common.sh@10 
-- # set +x 00:08:42.704 14:45:42 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:42.704 14:45:42 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:42.704 14:45:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.704 14:45:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:42.704 14:45:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:42.704 14:45:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:42.704 14:45:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.704 14:45:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.704 14:45:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.704 14:45:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:42.704 14:45:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:42.704 14:45:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.704 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 14:45:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:44.605 14:45:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.605 14:45:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.605 14:45:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.605 14:45:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.605 14:45:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.605 14:45:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.605 14:45:44 -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.605 14:45:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.605 14:45:44 -- nvmf/common.sh@296 -- # e810=() 00:08:44.605 14:45:44 -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.605 14:45:44 -- nvmf/common.sh@297 -- # x722=() 00:08:44.605 14:45:44 -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.605 14:45:44 -- nvmf/common.sh@298 -- # mlx=() 00:08:44.605 14:45:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.605 14:45:44 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.605 14:45:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.605 14:45:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:44.605 14:45:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:44.605 14:45:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:44.605 14:45:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:44.605 14:45:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:44.605 14:45:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.605 14:45:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.605 14:45:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:44.605 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:44.605 14:45:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:44.605 14:45:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:44.605 14:45:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:44.605 14:45:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 
00:08:44.605 14:45:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.605 14:45:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:44.606 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:44.606 14:45:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:44.606 14:45:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.606 14:45:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.606 14:45:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:44.606 Found net devices under 0000:09:00.0: mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.606 14:45:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.606 14:45:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.606 14:45:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:08:44.606 Found net devices under 0000:09:00.1: mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.606 14:45:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:44.606 14:45:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:44.606 
14:45:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:44.606 14:45:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:44.606 14:45:44 -- nvmf/common.sh@58 -- # uname 00:08:44.606 14:45:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:44.606 14:45:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:44.606 14:45:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:44.606 14:45:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:44.606 14:45:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:44.606 14:45:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:44.606 14:45:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:44.606 14:45:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:44.606 14:45:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:44.606 14:45:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:44.606 14:45:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:44.606 14:45:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:44.606 14:45:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:44.606 14:45:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:44.606 14:45:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:44.606 14:45:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@105 -- # continue 2 00:08:44.606 14:45:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@105 -- # continue 2 00:08:44.606 14:45:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:44.606 14:45:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:44.606 14:45:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:44.606 14:45:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:44.606 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:44.606 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:08:44.606 altname enp9s0f0np0 00:08:44.606 inet 192.168.100.8/24 scope global mlx_0_0 00:08:44.606 valid_lft forever preferred_lft forever 00:08:44.606 14:45:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:44.606 14:45:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:44.606 14:45:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:44.606 14:45:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:44.606 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:08:44.606 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:08:44.606 altname enp9s0f1np1 00:08:44.606 inet 192.168.100.9/24 scope global mlx_0_1 00:08:44.606 valid_lft forever preferred_lft forever 00:08:44.606 14:45:44 -- nvmf/common.sh@411 -- # return 0 00:08:44.606 14:45:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:44.606 14:45:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:44.606 14:45:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:44.606 14:45:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:44.606 14:45:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:44.606 14:45:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:44.606 14:45:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:44.606 14:45:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:44.606 14:45:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:44.606 14:45:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@105 -- # continue 2 00:08:44.606 14:45:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:44.606 14:45:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:44.606 14:45:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@105 -- # continue 2 00:08:44.606 14:45:44 -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:44.606 14:45:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:44.606 14:45:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:44.606 14:45:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:44.606 14:45:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:44.606 14:45:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:44.606 192.168.100.9' 00:08:44.606 14:45:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:44.606 192.168.100.9' 00:08:44.606 14:45:44 -- nvmf/common.sh@446 -- # head -n 1 00:08:44.606 14:45:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:44.606 14:45:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:44.606 192.168.100.9' 00:08:44.607 14:45:44 -- nvmf/common.sh@447 -- # tail -n +2 00:08:44.607 14:45:44 -- nvmf/common.sh@447 -- # head -n 1 00:08:44.607 14:45:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:44.607 14:45:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:44.607 14:45:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:44.607 14:45:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:44.607 14:45:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:44.607 14:45:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:44.607 14:45:44 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:44.607 14:45:44 -- target/nvmf_example.sh@27 -- # 
timing_enter start_nvmf_example 00:08:44.607 14:45:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:44.607 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:08:44.607 14:45:44 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:44.607 14:45:44 -- target/nvmf_example.sh@34 -- # nvmfpid=133091 00:08:44.607 14:45:44 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:44.607 14:45:44 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.607 14:45:44 -- target/nvmf_example.sh@36 -- # waitforlisten 133091 00:08:44.607 14:45:44 -- common/autotest_common.sh@817 -- # '[' -z 133091 ']' 00:08:44.607 14:45:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.607 14:45:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:44.607 14:45:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.607 14:45:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:44.607 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:08:44.607 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.542 14:45:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:45.542 14:45:45 -- common/autotest_common.sh@850 -- # return 0 00:08:45.542 14:45:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:45.542 14:45:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:45.542 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:08:45.542 14:45:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:45.542 14:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.542 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.107 14:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.107 14:45:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:46.107 14:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.107 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:08:46.107 14:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.107 14:45:46 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:46.107 14:45:46 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.107 14:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.107 14:45:46 -- common/autotest_common.sh@10 -- # set +x 00:08:46.107 14:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.107 14:45:46 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:46.107 14:45:46 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.107 14:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.107 14:45:46 -- common/autotest_common.sh@10 -- # set 
+x 00:08:46.107 14:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.107 14:45:46 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:46.107 14:45:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.107 14:45:46 -- common/autotest_common.sh@10 -- # set +x 00:08:46.107 14:45:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.107 14:45:46 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:46.107 14:45:46 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:46.107 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.307 Initializing NVMe Controllers 00:08:58.307 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:58.307 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:58.307 Initialization complete. Launching workers. 
00:08:58.307 ======================================================== 00:08:58.307 Latency(us) 00:08:58.307 Device Information : IOPS MiB/s Average min max 00:08:58.307 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16326.70 63.78 3922.39 1172.47 14769.29 00:08:58.307 ======================================================== 00:08:58.307 Total : 16326.70 63.78 3922.39 1172.47 14769.29 00:08:58.307 00:08:58.307 14:45:57 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:58.307 14:45:57 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:58.307 14:45:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:58.307 14:45:57 -- nvmf/common.sh@117 -- # sync 00:08:58.307 14:45:57 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:58.307 14:45:57 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:58.307 14:45:57 -- nvmf/common.sh@120 -- # set +e 00:08:58.307 14:45:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.307 14:45:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:58.307 rmmod nvme_rdma 00:08:58.307 rmmod nvme_fabrics 00:08:58.307 14:45:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.307 14:45:57 -- nvmf/common.sh@124 -- # set -e 00:08:58.307 14:45:57 -- nvmf/common.sh@125 -- # return 0 00:08:58.307 14:45:57 -- nvmf/common.sh@478 -- # '[' -n 133091 ']' 00:08:58.307 14:45:57 -- nvmf/common.sh@479 -- # killprocess 133091 00:08:58.307 14:45:57 -- common/autotest_common.sh@936 -- # '[' -z 133091 ']' 00:08:58.307 14:45:57 -- common/autotest_common.sh@940 -- # kill -0 133091 00:08:58.307 14:45:57 -- common/autotest_common.sh@941 -- # uname 00:08:58.307 14:45:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:58.307 14:45:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133091 00:08:58.307 14:45:57 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:58.307 14:45:57 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:58.307 14:45:57 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 133091' 00:08:58.307 killing process with pid 133091 00:08:58.307 14:45:57 -- common/autotest_common.sh@955 -- # kill 133091 00:08:58.307 14:45:57 -- common/autotest_common.sh@960 -- # wait 133091 00:08:58.307 [2024-04-26 14:45:58.008048] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:59.241 nvmf threads initialize successfully 00:08:59.241 bdev subsystem init successfully 00:08:59.241 created a nvmf target service 00:08:59.241 create targets's poll groups done 00:08:59.241 all subsystems of target started 00:08:59.241 nvmf target is running 00:08:59.241 all subsystems of target stopped 00:08:59.241 destroy targets's poll groups done 00:08:59.241 destroyed the nvmf target service 00:08:59.241 bdev subsystem finish successfully 00:08:59.241 nvmf threads destroy successfully 00:08:59.241 14:45:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:59.241 14:45:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:59.241 14:45:59 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:59.241 14:45:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:59.241 14:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:59.241 00:08:59.241 real 0m16.746s 00:08:59.241 user 0m56.720s 00:08:59.241 sys 0m1.994s 00:08:59.241 14:45:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:59.241 14:45:59 -- common/autotest_common.sh@10 -- # set +x 00:08:59.241 ************************************ 00:08:59.241 END TEST nvmf_example 00:08:59.241 ************************************ 00:08:59.241 14:45:59 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:59.241 14:45:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.241 14:45:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.241 14:45:59 -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.503 ************************************ 00:08:59.503 START TEST nvmf_filesystem 00:08:59.503 ************************************ 00:08:59.503 14:45:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:59.503 * Looking for test storage... 00:08:59.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.503 14:45:59 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:59.503 14:45:59 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:59.503 14:45:59 -- common/autotest_common.sh@34 -- # set -e 00:08:59.503 14:45:59 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:59.503 14:45:59 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:59.503 14:45:59 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:59.503 14:45:59 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:59.503 14:45:59 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:59.503 14:45:59 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:59.503 14:45:59 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:59.503 14:45:59 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:59.503 14:45:59 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:59.503 14:45:59 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:59.503 14:45:59 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:59.503 14:45:59 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:59.503 14:45:59 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:59.503 14:45:59 
-- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:59.503 14:45:59 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:59.503 14:45:59 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:59.503 14:45:59 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:59.503 14:45:59 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:59.503 14:45:59 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:59.503 14:45:59 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:59.503 14:45:59 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:59.503 14:45:59 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:59.503 14:45:59 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:59.503 14:45:59 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:59.503 14:45:59 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:59.503 14:45:59 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:59.503 14:45:59 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:59.503 14:45:59 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:59.503 14:45:59 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:59.503 14:45:59 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:59.503 14:45:59 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:59.503 14:45:59 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:59.503 14:45:59 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:59.503 14:45:59 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:59.503 14:45:59 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:59.503 14:45:59 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:59.503 14:45:59 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 
00:08:59.503 14:45:59 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:59.503 14:45:59 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:59.503 14:45:59 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:59.503 14:45:59 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:59.503 14:45:59 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:59.503 14:45:59 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:59.503 14:45:59 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:59.503 14:45:59 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:59.503 14:45:59 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:59.503 14:45:59 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:59.503 14:45:59 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:59.503 14:45:59 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:59.503 14:45:59 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:59.503 14:45:59 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:59.503 14:45:59 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:59.503 14:45:59 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:59.503 14:45:59 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:59.503 14:45:59 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:59.503 14:45:59 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:59.503 14:45:59 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:59.503 14:45:59 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:59.503 14:45:59 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:59.503 14:45:59 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 
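Each `CONFIG_*` value sourced from build_config.sh here has a matching `SPDK_CONFIG_*` define in `include/spdk/config.h`, and the trace shows applications.sh glob-matching that header a little further down to confirm this is a debug build. A minimal sketch of that check, run against a throwaway stand-in file rather than a real SPDK tree:

```shell
# Sketch of the applications.sh debug-build check, against a temporary
# stand-in for include/spdk/config.h (not a real SPDK build).
cfg=$(mktemp)
printf '#ifndef SPDK_CONFIG_H\n#define SPDK_CONFIG_H\n#define SPDK_CONFIG_DEBUG 1\n#endif\n' > "$cfg"

# Same pattern match the trace shows at applications.sh@23:
if [[ $(< "$cfg") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
  build_type=debug
else
  build_type=release
fi
echo "$build_type"
rm -f "$cfg"
```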
00:08:59.503 14:45:59 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:08:59.503 14:45:59 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:59.503 14:45:59 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:59.503 14:45:59 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:59.503 14:45:59 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:59.503 14:45:59 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:59.504 14:45:59 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:59.504 14:45:59 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:59.504 14:45:59 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:59.504 14:45:59 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:59.504 14:45:59 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:59.504 14:45:59 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:59.504 14:45:59 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:59.504 14:45:59 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:59.504 14:45:59 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:59.504 14:45:59 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:59.504 14:45:59 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:59.504 14:45:59 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:59.504 14:45:59 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:59.504 14:45:59 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:59.504 14:45:59 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:59.504 14:45:59 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:59.504 14:45:59 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:59.504 14:45:59 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:59.504 14:45:59 -- 
common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:59.504 14:45:59 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:59.504 14:45:59 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:59.504 14:45:59 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:59.504 14:45:59 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:59.504 14:45:59 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:59.504 14:45:59 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:59.504 14:45:59 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:59.504 14:45:59 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:59.504 14:45:59 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:59.504 14:45:59 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:59.504 14:45:59 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:59.504 14:45:59 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:59.504 #define SPDK_CONFIG_H 00:08:59.504 #define SPDK_CONFIG_APPS 1 00:08:59.504 #define SPDK_CONFIG_ARCH native 00:08:59.504 #define SPDK_CONFIG_ASAN 1 00:08:59.504 #undef SPDK_CONFIG_AVAHI 00:08:59.504 #undef SPDK_CONFIG_CET 00:08:59.504 #define SPDK_CONFIG_COVERAGE 1 00:08:59.504 #define SPDK_CONFIG_CROSS_PREFIX 00:08:59.504 #undef SPDK_CONFIG_CRYPTO 00:08:59.504 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:59.504 #undef SPDK_CONFIG_CUSTOMOCF 00:08:59.504 #undef SPDK_CONFIG_DAOS 00:08:59.504 #define SPDK_CONFIG_DAOS_DIR 00:08:59.504 #define SPDK_CONFIG_DEBUG 1 00:08:59.504 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:59.504 #define 
SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:59.504 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:59.504 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:59.504 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:59.504 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:59.504 #define SPDK_CONFIG_EXAMPLES 1 00:08:59.504 #undef SPDK_CONFIG_FC 00:08:59.504 #define SPDK_CONFIG_FC_PATH 00:08:59.504 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:59.504 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:59.504 #undef SPDK_CONFIG_FUSE 00:08:59.504 #undef SPDK_CONFIG_FUZZER 00:08:59.504 #define SPDK_CONFIG_FUZZER_LIB 00:08:59.504 #undef SPDK_CONFIG_GOLANG 00:08:59.504 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:59.504 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:59.504 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:59.504 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:59.504 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:59.504 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:59.504 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:59.504 #define SPDK_CONFIG_IDXD 1 00:08:59.504 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:59.504 #undef SPDK_CONFIG_IPSEC_MB 00:08:59.504 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:59.504 #define SPDK_CONFIG_ISAL 1 00:08:59.504 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:59.504 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:59.504 #define SPDK_CONFIG_LIBDIR 00:08:59.504 #undef SPDK_CONFIG_LTO 00:08:59.504 #define SPDK_CONFIG_MAX_LCORES 00:08:59.504 #define SPDK_CONFIG_NVME_CUSE 1 00:08:59.504 #undef SPDK_CONFIG_OCF 00:08:59.504 #define SPDK_CONFIG_OCF_PATH 00:08:59.504 #define SPDK_CONFIG_OPENSSL_PATH 00:08:59.504 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:59.504 #define SPDK_CONFIG_PGO_DIR 00:08:59.504 #undef SPDK_CONFIG_PGO_USE 00:08:59.504 #define SPDK_CONFIG_PREFIX /usr/local 00:08:59.504 #undef SPDK_CONFIG_RAID5F 00:08:59.504 #undef SPDK_CONFIG_RBD 00:08:59.504 #define SPDK_CONFIG_RDMA 1 00:08:59.504 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:59.504 
#define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:59.504 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:59.504 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:59.504 #define SPDK_CONFIG_SHARED 1 00:08:59.504 #undef SPDK_CONFIG_SMA 00:08:59.504 #define SPDK_CONFIG_TESTS 1 00:08:59.504 #undef SPDK_CONFIG_TSAN 00:08:59.504 #define SPDK_CONFIG_UBLK 1 00:08:59.504 #define SPDK_CONFIG_UBSAN 1 00:08:59.504 #undef SPDK_CONFIG_UNIT_TESTS 00:08:59.504 #undef SPDK_CONFIG_URING 00:08:59.504 #define SPDK_CONFIG_URING_PATH 00:08:59.504 #undef SPDK_CONFIG_URING_ZNS 00:08:59.504 #undef SPDK_CONFIG_USDT 00:08:59.504 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:59.504 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:59.504 #undef SPDK_CONFIG_VFIO_USER 00:08:59.504 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:59.504 #define SPDK_CONFIG_VHOST 1 00:08:59.504 #define SPDK_CONFIG_VIRTIO 1 00:08:59.504 #undef SPDK_CONFIG_VTUNE 00:08:59.504 #define SPDK_CONFIG_VTUNE_DIR 00:08:59.504 #define SPDK_CONFIG_WERROR 1 00:08:59.504 #define SPDK_CONFIG_WPDK_DIR 00:08:59.504 #undef SPDK_CONFIG_XNVME 00:08:59.504 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:59.504 14:45:59 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:59.504 14:45:59 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:59.504 14:45:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.504 14:45:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.504 14:45:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.504 14:45:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 14:45:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 14:45:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 14:45:59 -- paths/export.sh@5 -- # export PATH 00:08:59.504 14:45:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.504 14:45:59 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:59.504 14:45:59 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:59.504 14:45:59 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:59.504 14:45:59 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:59.504 14:45:59 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:59.504 14:45:59 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:59.504 14:45:59 -- pm/common@67 -- # TEST_TAG=N/A 00:08:59.504 14:45:59 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:59.504 14:45:59 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:59.504 14:45:59 -- pm/common@71 -- # uname -s 00:08:59.504 14:45:59 -- pm/common@71 -- # PM_OS=Linux 00:08:59.504 14:45:59 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:59.504 14:45:59 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:59.504 14:45:59 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:59.504 14:45:59 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:08:59.504 14:45:59 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:59.504 14:45:59 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:59.504 14:45:59 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:59.504 14:45:59 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:59.504 14:45:59 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:59.504 14:45:59 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:59.504 14:45:59 -- common/autotest_common.sh@57 -- # : 0 00:08:59.504 14:45:59 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:59.505 14:45:59 -- common/autotest_common.sh@61 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:59.505 14:45:59 -- common/autotest_common.sh@63 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:59.505 14:45:59 -- common/autotest_common.sh@65 -- # : 1 00:08:59.505 14:45:59 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:59.505 14:45:59 -- common/autotest_common.sh@67 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:59.505 14:45:59 -- common/autotest_common.sh@69 -- # : 00:08:59.505 14:45:59 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:59.505 14:45:59 -- common/autotest_common.sh@71 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:59.505 14:45:59 -- common/autotest_common.sh@73 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:59.505 14:45:59 -- common/autotest_common.sh@75 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:59.505 14:45:59 -- common/autotest_common.sh@77 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:59.505 
14:45:59 -- common/autotest_common.sh@79 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:59.505 14:45:59 -- common/autotest_common.sh@81 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:59.505 14:45:59 -- common/autotest_common.sh@83 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:59.505 14:45:59 -- common/autotest_common.sh@85 -- # : 1 00:08:59.505 14:45:59 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:59.505 14:45:59 -- common/autotest_common.sh@87 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:59.505 14:45:59 -- common/autotest_common.sh@89 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:59.505 14:45:59 -- common/autotest_common.sh@91 -- # : 1 00:08:59.505 14:45:59 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:59.505 14:45:59 -- common/autotest_common.sh@93 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:59.505 14:45:59 -- common/autotest_common.sh@95 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:59.505 14:45:59 -- common/autotest_common.sh@97 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:59.505 14:45:59 -- common/autotest_common.sh@99 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:59.505 14:45:59 -- common/autotest_common.sh@101 -- # : rdma 00:08:59.505 14:45:59 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:59.505 14:45:59 -- common/autotest_common.sh@103 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:59.505 14:45:59 -- common/autotest_common.sh@105 -- # : 0 00:08:59.505 
14:45:59 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:59.505 14:45:59 -- common/autotest_common.sh@107 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:59.505 14:45:59 -- common/autotest_common.sh@109 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:59.505 14:45:59 -- common/autotest_common.sh@111 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:59.505 14:45:59 -- common/autotest_common.sh@113 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:59.505 14:45:59 -- common/autotest_common.sh@115 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:59.505 14:45:59 -- common/autotest_common.sh@117 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:59.505 14:45:59 -- common/autotest_common.sh@119 -- # : 1 00:08:59.505 14:45:59 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:59.505 14:45:59 -- common/autotest_common.sh@121 -- # : 1 00:08:59.505 14:45:59 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:59.505 14:45:59 -- common/autotest_common.sh@123 -- # : 00:08:59.505 14:45:59 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:59.505 14:45:59 -- common/autotest_common.sh@125 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:59.505 14:45:59 -- common/autotest_common.sh@127 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:59.505 14:45:59 -- common/autotest_common.sh@129 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:59.505 14:45:59 -- common/autotest_common.sh@131 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:08:59.505 14:45:59 -- common/autotest_common.sh@133 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:59.505 14:45:59 -- common/autotest_common.sh@135 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:59.505 14:45:59 -- common/autotest_common.sh@137 -- # : 00:08:59.505 14:45:59 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:59.505 14:45:59 -- common/autotest_common.sh@139 -- # : true 00:08:59.505 14:45:59 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:59.505 14:45:59 -- common/autotest_common.sh@141 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:59.505 14:45:59 -- common/autotest_common.sh@143 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:59.505 14:45:59 -- common/autotest_common.sh@145 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:59.505 14:45:59 -- common/autotest_common.sh@147 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:59.505 14:45:59 -- common/autotest_common.sh@149 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:59.505 14:45:59 -- common/autotest_common.sh@151 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:59.505 14:45:59 -- common/autotest_common.sh@153 -- # : mlx5 00:08:59.505 14:45:59 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:59.505 14:45:59 -- common/autotest_common.sh@155 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:59.505 14:45:59 -- common/autotest_common.sh@157 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:59.505 14:45:59 -- common/autotest_common.sh@159 -- # : 0 
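The alternating `: 0` / `export SPDK_TEST_*` records in this stretch of the trace come from bash's default-value idiom: autotest_common.sh assigns each test flag only if the caller left it unset, then exports it. A sketch using one flag name reused from the log (any `SPDK_TEST_*` flag behaves the same way):

```shell
# Default-value idiom behind the ": 0" / "export" pairs in the trace.
unset SPDK_TEST_NVMF

: "${SPDK_TEST_NVMF:=0}"   # assigns 0 only because the flag was unset; xtrace logs this as ": 0"
export SPDK_TEST_NVMF
echo "$SPDK_TEST_NVMF"     # prints 0

SPDK_TEST_NVMF=1           # a caller-provided value survives the idiom untouched
: "${SPDK_TEST_NVMF:=0}"
echo "$SPDK_TEST_NVMF"     # prints 1
```

This is why a job can pre-set `SPDK_TEST_NVMF=1` in its environment and have the common script leave it alone while still defaulting every other flag to 0.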
00:08:59.505 14:45:59 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:59.505 14:45:59 -- common/autotest_common.sh@161 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:59.505 14:45:59 -- common/autotest_common.sh@163 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:59.505 14:45:59 -- common/autotest_common.sh@166 -- # : 00:08:59.505 14:45:59 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:59.505 14:45:59 -- common/autotest_common.sh@168 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:59.505 14:45:59 -- common/autotest_common.sh@170 -- # : 0 00:08:59.505 14:45:59 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:59.505 14:45:59 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:59.505 14:45:59 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:59.505 14:45:59 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:59.505 14:45:59 -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:59.505 14:45:59 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:59.505 14:45:59 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:59.505 14:45:59 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:59.505 14:45:59 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:59.506 14:45:59 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:59.506 14:45:59 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:59.506 14:45:59 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:59.506 14:45:59 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:59.506 14:45:59 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:59.506 14:45:59 -- common/autotest_common.sh@199 -- # cat 00:08:59.506 14:45:59 
-- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:59.506 14:45:59 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:59.506 14:45:59 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:59.506 14:45:59 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:59.506 14:45:59 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:59.506 14:45:59 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:59.506 14:45:59 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:59.506 14:45:59 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:59.506 14:45:59 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:59.506 14:45:59 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:59.506 14:45:59 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:59.506 14:45:59 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:59.506 14:45:59 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:59.506 14:45:59 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:59.506 14:45:59 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:59.506 14:45:59 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:59.506 14:45:59 -- common/autotest_common.sh@245 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:59.506 14:45:59 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:59.506 14:45:59 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:59.506 14:45:59 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:59.506 14:45:59 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:59.506 14:45:59 -- common/autotest_common.sh@252 -- # valgrind= 00:08:59.506 14:45:59 -- common/autotest_common.sh@258 -- # uname -s 00:08:59.506 14:45:59 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:08:59.506 14:45:59 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:59.506 14:45:59 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:59.506 14:45:59 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:59.506 14:45:59 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:59.506 14:45:59 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:08:59.506 14:45:59 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:59.506 14:45:59 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:59.506 14:45:59 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:59.506 14:45:59 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:59.506 14:45:59 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:59.506 14:45:59 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:59.506 14:45:59 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:08:59.506 14:45:59 -- common/autotest_common.sh@307 -- # [[ -z 135013 ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@307 -- # kill -0 135013 00:08:59.506 14:45:59 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:59.506 14:45:59 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 
00:08:59.506 14:45:59 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:59.506 14:45:59 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:59.506 14:45:59 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:59.506 14:45:59 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:59.506 14:45:59 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:59.506 14:45:59 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:59.506 14:45:59 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.rj25hb 00:08:59.506 14:45:59 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:59.506 14:45:59 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rj25hb/tests/target /tmp/spdk.rj25hb 00:08:59.506 14:45:59 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@316 -- # df -T 00:08:59.506 14:45:59 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 
14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=56697901056 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=5296807936 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=30994739200 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390187008 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=8757248 
00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=30997049344 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=307200 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:08:59.506 14:45:59 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:08:59.506 14:45:59 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:59.506 14:45:59 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:59.506 14:45:59 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:59.506 * Looking for test storage... 
00:08:59.506 14:45:59 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:59.506 14:45:59 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:59.506 14:45:59 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.506 14:45:59 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:59.506 14:45:59 -- common/autotest_common.sh@361 -- # mount=/ 00:08:59.506 14:45:59 -- common/autotest_common.sh@363 -- # target_space=56697901056 00:08:59.506 14:45:59 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:59.506 14:45:59 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:59.506 14:45:59 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:59.506 14:45:59 -- common/autotest_common.sh@370 -- # new_size=7511400448 00:08:59.506 14:45:59 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:59.506 14:45:59 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.506 14:45:59 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.506 14:45:59 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.506 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.506 14:45:59 -- common/autotest_common.sh@378 -- # return 0 00:08:59.506 14:45:59 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:59.506 14:45:59 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:59.506 14:45:59 -- 
common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:59.506 14:45:59 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:59.506 14:45:59 -- common/autotest_common.sh@1673 -- # true 00:08:59.507 14:45:59 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:59.507 14:45:59 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:59.507 14:45:59 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:59.507 14:45:59 -- common/autotest_common.sh@27 -- # exec 00:08:59.507 14:45:59 -- common/autotest_common.sh@29 -- # exec 00:08:59.507 14:45:59 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:59.507 14:45:59 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:59.507 14:45:59 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:59.507 14:45:59 -- common/autotest_common.sh@18 -- # set -x 00:08:59.507 14:45:59 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.507 14:45:59 -- nvmf/common.sh@7 -- # uname -s 00:08:59.507 14:45:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.507 14:45:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.507 14:45:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.507 14:45:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.507 14:45:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.507 14:45:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.507 14:45:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.507 14:45:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.507 14:45:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.507 14:45:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.507 14:45:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:08:59.507 14:45:59 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:08:59.507 14:45:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.507 14:45:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.507 14:45:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.507 14:45:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.507 14:45:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:59.507 14:45:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.507 14:45:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.507 14:45:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.507 14:45:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.507 14:45:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.507 14:45:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.507 14:45:59 -- paths/export.sh@5 -- # export PATH 00:08:59.507 14:45:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.507 14:45:59 -- nvmf/common.sh@47 
-- # : 0 00:08:59.507 14:45:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.507 14:45:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.507 14:45:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.507 14:45:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.507 14:45:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.507 14:45:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.507 14:45:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.507 14:45:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.507 14:45:59 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:59.507 14:45:59 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:59.507 14:45:59 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:59.507 14:45:59 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:59.507 14:45:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.507 14:45:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:59.507 14:45:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:59.507 14:45:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:59.507 14:45:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.507 14:45:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.507 14:45:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.507 14:45:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:59.507 14:45:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:59.507 14:45:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.507 14:45:59 -- common/autotest_common.sh@10 -- # set +x 00:09:02.038 14:46:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:02.038 14:46:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.038 14:46:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.038 14:46:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.038 14:46:01 -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:09:02.038 14:46:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.038 14:46:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.038 14:46:01 -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.038 14:46:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.038 14:46:01 -- nvmf/common.sh@296 -- # e810=() 00:09:02.038 14:46:01 -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.038 14:46:01 -- nvmf/common.sh@297 -- # x722=() 00:09:02.038 14:46:01 -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.038 14:46:01 -- nvmf/common.sh@298 -- # mlx=() 00:09:02.038 14:46:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.038 14:46:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.038 14:46:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.038 14:46:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:02.038 14:46:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:02.038 14:46:01 -- 
nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:02.038 14:46:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.038 14:46:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.038 14:46:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:02.038 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:02.038 14:46:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.038 14:46:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.038 14:46:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:02.038 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:02.038 14:46:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:02.038 14:46:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.038 14:46:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.038 14:46:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.038 14:46:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.038 14:46:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.038 14:46:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:02.038 Found net devices under 0000:09:00.0: mlx_0_0 00:09:02.038 14:46:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.038 14:46:01 -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:09:02.038 14:46:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.038 14:46:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:02.038 14:46:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.038 14:46:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:02.038 Found net devices under 0000:09:00.1: mlx_0_1 00:09:02.038 14:46:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.038 14:46:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:02.038 14:46:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:02.038 14:46:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:02.038 14:46:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:02.038 14:46:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:02.038 14:46:01 -- nvmf/common.sh@58 -- # uname 00:09:02.038 14:46:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:02.038 14:46:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:02.038 14:46:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:02.038 14:46:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:02.038 14:46:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:02.038 14:46:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:02.038 14:46:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:02.038 14:46:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:02.038 14:46:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:02.038 14:46:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:02.038 14:46:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:02.038 14:46:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.038 14:46:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:02.039 14:46:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:02.039 
14:46:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.039 14:46:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:02.039 14:46:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@105 -- # continue 2 00:09:02.039 14:46:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@105 -- # continue 2 00:09:02.039 14:46:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:02.039 14:46:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.039 14:46:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:02.039 14:46:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:02.039 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.039 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:09:02.039 altname enp9s0f0np0 00:09:02.039 inet 192.168.100.8/24 scope global mlx_0_0 00:09:02.039 valid_lft forever preferred_lft 
forever 00:09:02.039 14:46:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:02.039 14:46:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.039 14:46:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:02.039 14:46:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:02.039 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:02.039 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:09:02.039 altname enp9s0f1np1 00:09:02.039 inet 192.168.100.9/24 scope global mlx_0_1 00:09:02.039 valid_lft forever preferred_lft forever 00:09:02.039 14:46:01 -- nvmf/common.sh@411 -- # return 0 00:09:02.039 14:46:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:02.039 14:46:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:02.039 14:46:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:02.039 14:46:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:02.039 14:46:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:02.039 14:46:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:02.039 14:46:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:02.039 14:46:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:02.039 14:46:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:02.039 14:46:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 
00:09:02.039 14:46:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@105 -- # continue 2 00:09:02.039 14:46:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:02.039 14:46:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:02.039 14:46:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@105 -- # continue 2 00:09:02.039 14:46:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:02.039 14:46:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.039 14:46:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:02.039 14:46:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:02.039 14:46:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:02.039 14:46:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:02.039 192.168.100.9' 00:09:02.039 14:46:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:02.039 192.168.100.9' 00:09:02.039 14:46:01 -- nvmf/common.sh@446 -- # head -n 1 00:09:02.039 14:46:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:02.039 14:46:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:02.039 192.168.100.9' 00:09:02.039 14:46:01 -- 
nvmf/common.sh@447 -- # tail -n +2 00:09:02.039 14:46:01 -- nvmf/common.sh@447 -- # head -n 1 00:09:02.039 14:46:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:02.039 14:46:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:02.039 14:46:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:02.039 14:46:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:02.039 14:46:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:02.039 14:46:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:02.039 14:46:01 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:02.039 14:46:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.039 14:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.039 14:46:01 -- common/autotest_common.sh@10 -- # set +x 00:09:02.039 ************************************ 00:09:02.039 START TEST nvmf_filesystem_no_in_capsule 00:09:02.039 ************************************ 00:09:02.039 14:46:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:09:02.039 14:46:01 -- target/filesystem.sh@47 -- # in_capsule=0 00:09:02.039 14:46:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:02.039 14:46:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:02.039 14:46:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:02.039 14:46:01 -- common/autotest_common.sh@10 -- # set +x 00:09:02.039 14:46:01 -- nvmf/common.sh@470 -- # nvmfpid=136682 00:09:02.039 14:46:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.039 14:46:01 -- nvmf/common.sh@471 -- # waitforlisten 136682 00:09:02.039 14:46:01 -- common/autotest_common.sh@817 -- # '[' -z 136682 ']' 00:09:02.039 14:46:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.039 14:46:01 -- common/autotest_common.sh@822 -- # 
local max_retries=100 00:09:02.039 14:46:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.039 14:46:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:02.039 14:46:01 -- common/autotest_common.sh@10 -- # set +x 00:09:02.039 [2024-04-26 14:46:01.917034] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:02.039 [2024-04-26 14:46:01.917211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.039 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.039 [2024-04-26 14:46:02.043187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.298 [2024-04-26 14:46:02.299765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.298 [2024-04-26 14:46:02.299844] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.298 [2024-04-26 14:46:02.299871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.298 [2024-04-26 14:46:02.299894] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.298 [2024-04-26 14:46:02.299912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
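The trace above builds `RDMA_IP_LIST` as a newline-separated string and then peels off `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` with `head`/`tail`. The same selection, sketched on its own (addresses copied from the log):

```shell
#!/bin/sh
# RDMA_IP_LIST in the trace is a newline-separated list of per-port IPs.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First target: the first line of the list.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
# Second target: skip the first line (tail -n +2), then take the next one.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

This is why the `echo '192.168.100.8 ... 192.168.100.9'` lines appear twice in the trace: once piped through `head -n 1` and once through `tail -n +2 | head -n 1`.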
00:09:02.298 [2024-04-26 14:46:02.300035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.298 [2024-04-26 14:46:02.300087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.298 [2024-04-26 14:46:02.300144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.298 [2024-04-26 14:46:02.300154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.869 14:46:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:02.869 14:46:02 -- common/autotest_common.sh@850 -- # return 0 00:09:02.869 14:46:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:02.869 14:46:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:02.869 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:09:02.869 14:46:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.869 14:46:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:02.869 14:46:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:02.869 14:46:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.869 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:09:02.869 [2024-04-26 14:46:02.866612] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:02.869 [2024-04-26 14:46:02.892844] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7fc371f6a940) succeed. 00:09:02.869 [2024-04-26 14:46:02.903794] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7fc371f24940) succeed. 
00:09:03.127 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.127 14:46:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:03.127 14:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.127 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.692 Malloc1 00:09:03.692 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.692 14:46:03 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.692 14:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.693 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.693 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.693 14:46:03 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:03.693 14:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.693 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.693 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.693 14:46:03 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:03.693 14:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.693 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.693 [2024-04-26 14:46:03.646788] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:03.693 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.693 14:46:03 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:03.693 14:46:03 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:09:03.693 14:46:03 -- common/autotest_common.sh@1365 -- # local bdev_info 00:09:03.693 14:46:03 -- common/autotest_common.sh@1366 -- # local bs 00:09:03.693 14:46:03 -- common/autotest_common.sh@1367 -- # local nb 00:09:03.693 
14:46:03 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:03.693 14:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.693 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:09:03.693 14:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.693 14:46:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:09:03.693 { 00:09:03.693 "name": "Malloc1", 00:09:03.693 "aliases": [ 00:09:03.693 "9d8e9a7b-4915-4f9d-9a3b-dc65b31fbeb9" 00:09:03.693 ], 00:09:03.693 "product_name": "Malloc disk", 00:09:03.693 "block_size": 512, 00:09:03.693 "num_blocks": 1048576, 00:09:03.693 "uuid": "9d8e9a7b-4915-4f9d-9a3b-dc65b31fbeb9", 00:09:03.693 "assigned_rate_limits": { 00:09:03.693 "rw_ios_per_sec": 0, 00:09:03.693 "rw_mbytes_per_sec": 0, 00:09:03.693 "r_mbytes_per_sec": 0, 00:09:03.693 "w_mbytes_per_sec": 0 00:09:03.693 }, 00:09:03.693 "claimed": true, 00:09:03.693 "claim_type": "exclusive_write", 00:09:03.693 "zoned": false, 00:09:03.693 "supported_io_types": { 00:09:03.693 "read": true, 00:09:03.693 "write": true, 00:09:03.693 "unmap": true, 00:09:03.693 "write_zeroes": true, 00:09:03.693 "flush": true, 00:09:03.693 "reset": true, 00:09:03.693 "compare": false, 00:09:03.693 "compare_and_write": false, 00:09:03.693 "abort": true, 00:09:03.693 "nvme_admin": false, 00:09:03.693 "nvme_io": false 00:09:03.693 }, 00:09:03.693 "memory_domains": [ 00:09:03.693 { 00:09:03.693 "dma_device_id": "system", 00:09:03.693 "dma_device_type": 1 00:09:03.693 }, 00:09:03.693 { 00:09:03.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.693 "dma_device_type": 2 00:09:03.693 } 00:09:03.693 ], 00:09:03.693 "driver_specific": {} 00:09:03.693 } 00:09:03.693 ]' 00:09:03.693 14:46:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:09:03.693 14:46:03 -- common/autotest_common.sh@1369 -- # bs=512 00:09:03.693 14:46:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:09:03.693 14:46:03 -- 
common/autotest_common.sh@1370 -- # nb=1048576 00:09:03.693 14:46:03 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:09:03.693 14:46:03 -- common/autotest_common.sh@1374 -- # echo 512 00:09:03.693 14:46:03 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:03.693 14:46:03 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:07.876 14:46:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.876 14:46:07 -- common/autotest_common.sh@1184 -- # local i=0 00:09:07.876 14:46:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.876 14:46:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:07.876 14:46:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:09.247 14:46:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:09.247 14:46:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:09.247 14:46:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.247 14:46:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:09.247 14:46:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.247 14:46:09 -- common/autotest_common.sh@1194 -- # return 0 00:09:09.247 14:46:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:09.247 14:46:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:09.247 14:46:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:09.247 14:46:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:09.247 14:46:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:09.248 14:46:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:09.248 14:46:09 -- setup/common.sh@80 -- # echo 536870912 00:09:09.248 14:46:09 -- 
target/filesystem.sh@64 -- # nvme_size=536870912 00:09:09.248 14:46:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:09.248 14:46:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:09.248 14:46:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:09.248 14:46:09 -- target/filesystem.sh@69 -- # partprobe 00:09:09.505 14:46:09 -- target/filesystem.sh@70 -- # sleep 1 00:09:10.437 14:46:10 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:10.437 14:46:10 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:10.437 14:46:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:10.437 14:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.437 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:09:10.694 ************************************ 00:09:10.694 START TEST filesystem_ext4 00:09:10.694 ************************************ 00:09:10.694 14:46:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:10.694 14:46:10 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:10.694 14:46:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:10.694 14:46:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:10.694 14:46:10 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:10.694 14:46:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:10.694 14:46:10 -- common/autotest_common.sh@914 -- # local i=0 00:09:10.694 14:46:10 -- common/autotest_common.sh@915 -- # local force 00:09:10.694 14:46:10 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:10.694 14:46:10 -- common/autotest_common.sh@918 -- # force=-F 00:09:10.694 14:46:10 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:10.694 mke2fs 1.46.5 (30-Dec-2021) 00:09:10.694 Discarding device blocks: 0/522240 done 00:09:10.694 Creating filesystem with 
522240 1k blocks and 130560 inodes 00:09:10.694 Filesystem UUID: 3f71d777-4fb7-4535-bd06-bf1ad2f2fb9e 00:09:10.694 Superblock backups stored on blocks: 00:09:10.694 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:10.694 00:09:10.694 Allocating group tables: 0/64 done 00:09:10.694 Writing inode tables: 0/64 done 00:09:10.694 Creating journal (8192 blocks): done 00:09:10.694 Writing superblocks and filesystem accounting information: 0/64 done 00:09:10.694 00:09:10.694 14:46:10 -- common/autotest_common.sh@931 -- # return 0 00:09:10.694 14:46:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:10.694 14:46:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:10.694 14:46:10 -- target/filesystem.sh@25 -- # sync 00:09:10.694 14:46:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:10.694 14:46:10 -- target/filesystem.sh@27 -- # sync 00:09:10.694 14:46:10 -- target/filesystem.sh@29 -- # i=0 00:09:10.694 14:46:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:10.694 14:46:10 -- target/filesystem.sh@37 -- # kill -0 136682 00:09:10.694 14:46:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:10.694 14:46:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:10.694 14:46:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:10.694 14:46:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:10.694 00:09:10.694 real 0m0.161s 00:09:10.694 user 0m0.012s 00:09:10.694 sys 0m0.027s 00:09:10.694 14:46:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:10.694 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:09:10.694 ************************************ 00:09:10.694 END TEST filesystem_ext4 00:09:10.694 ************************************ 00:09:10.694 14:46:10 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:10.694 14:46:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:10.694 14:46:10 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.694 14:46:10 -- common/autotest_common.sh@10 -- # set +x 00:09:10.952 ************************************ 00:09:10.952 START TEST filesystem_btrfs 00:09:10.952 ************************************ 00:09:10.952 14:46:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:10.952 14:46:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:10.952 14:46:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:10.952 14:46:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:10.952 14:46:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:10.952 14:46:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:10.952 14:46:10 -- common/autotest_common.sh@914 -- # local i=0 00:09:10.952 14:46:10 -- common/autotest_common.sh@915 -- # local force 00:09:10.952 14:46:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:10.952 14:46:10 -- common/autotest_common.sh@920 -- # force=-f 00:09:10.952 14:46:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:11.210 btrfs-progs v6.6.2 00:09:11.210 See https://btrfs.readthedocs.io for more information. 00:09:11.210 00:09:11.210 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:11.210 NOTE: several default settings have changed in version 5.15, please make sure 00:09:11.210 this does not affect your deployments: 00:09:11.210 - DUP for metadata (-m dup) 00:09:11.210 - enabled no-holes (-O no-holes) 00:09:11.210 - enabled free-space-tree (-R free-space-tree) 00:09:11.210 00:09:11.210 Label: (null) 00:09:11.210 UUID: 9a16385c-95fa-49ed-b503-ef83d286b67a 00:09:11.210 Node size: 16384 00:09:11.210 Sector size: 4096 00:09:11.210 Filesystem size: 510.00MiB 00:09:11.210 Block group profiles: 00:09:11.210 Data: single 8.00MiB 00:09:11.210 Metadata: DUP 32.00MiB 00:09:11.210 System: DUP 8.00MiB 00:09:11.210 SSD detected: yes 00:09:11.210 Zoned device: no 00:09:11.210 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:11.210 Runtime features: free-space-tree 00:09:11.210 Checksum: crc32c 00:09:11.210 Number of devices: 1 00:09:11.210 Devices: 00:09:11.210 ID SIZE PATH 00:09:11.210 1 510.00MiB /dev/nvme0n1p1 00:09:11.210 00:09:11.210 14:46:11 -- common/autotest_common.sh@931 -- # return 0 00:09:11.210 14:46:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:11.210 14:46:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:11.210 14:46:11 -- target/filesystem.sh@25 -- # sync 00:09:11.210 14:46:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:11.210 14:46:11 -- target/filesystem.sh@27 -- # sync 00:09:11.210 14:46:11 -- target/filesystem.sh@29 -- # i=0 00:09:11.210 14:46:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:11.210 14:46:11 -- target/filesystem.sh@37 -- # kill -0 136682 00:09:11.210 14:46:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:11.210 14:46:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:11.210 14:46:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:11.210 14:46:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:11.210 00:09:11.210 real 0m0.213s 00:09:11.210 user 0m0.015s 00:09:11.210 sys 0m0.071s 00:09:11.210 
14:46:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:11.210 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:09:11.210 ************************************ 00:09:11.210 END TEST filesystem_btrfs 00:09:11.210 ************************************ 00:09:11.210 14:46:11 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:11.210 14:46:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:11.210 14:46:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.210 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:09:11.210 ************************************ 00:09:11.210 START TEST filesystem_xfs 00:09:11.210 ************************************ 00:09:11.210 14:46:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:09:11.210 14:46:11 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:11.210 14:46:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:11.210 14:46:11 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:11.210 14:46:11 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:11.210 14:46:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:11.210 14:46:11 -- common/autotest_common.sh@914 -- # local i=0 00:09:11.210 14:46:11 -- common/autotest_common.sh@915 -- # local force 00:09:11.210 14:46:11 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:11.210 14:46:11 -- common/autotest_common.sh@920 -- # force=-f 00:09:11.210 14:46:11 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:11.467 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:11.467 = sectsz=512 attr=2, projid32bit=1 00:09:11.467 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:11.467 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:11.467 data = bsize=4096 blocks=130560, imaxpct=25 00:09:11.467 = sunit=0 swidth=0 blks 00:09:11.467 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:09:11.467 log =internal log bsize=4096 blocks=16384, version=2 00:09:11.467 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:11.467 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:11.467 Discarding blocks...Done. 00:09:11.467 14:46:11 -- common/autotest_common.sh@931 -- # return 0 00:09:11.467 14:46:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:12.031 14:46:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:12.031 14:46:11 -- target/filesystem.sh@25 -- # sync 00:09:12.032 14:46:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:12.032 14:46:11 -- target/filesystem.sh@27 -- # sync 00:09:12.032 14:46:11 -- target/filesystem.sh@29 -- # i=0 00:09:12.032 14:46:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:12.032 14:46:11 -- target/filesystem.sh@37 -- # kill -0 136682 00:09:12.032 14:46:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:12.032 14:46:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:12.032 14:46:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:12.032 14:46:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:12.032 00:09:12.032 real 0m0.702s 00:09:12.032 user 0m0.005s 00:09:12.032 sys 0m0.057s 00:09:12.032 14:46:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:12.032 14:46:11 -- common/autotest_common.sh@10 -- # set +x 00:09:12.032 ************************************ 00:09:12.032 END TEST filesystem_xfs 00:09:12.032 ************************************ 00:09:12.032 14:46:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:12.032 14:46:11 -- target/filesystem.sh@93 -- # sync 00:09:12.032 14:46:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.555 14:46:14 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.555 14:46:14 -- common/autotest_common.sh@1205 -- # 
local i=0 00:09:14.555 14:46:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:14.555 14:46:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.555 14:46:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:14.555 14:46:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.555 14:46:14 -- common/autotest_common.sh@1217 -- # return 0 00:09:14.555 14:46:14 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.555 14:46:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.555 14:46:14 -- common/autotest_common.sh@10 -- # set +x 00:09:14.555 14:46:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.555 14:46:14 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:14.555 14:46:14 -- target/filesystem.sh@101 -- # killprocess 136682 00:09:14.555 14:46:14 -- common/autotest_common.sh@936 -- # '[' -z 136682 ']' 00:09:14.555 14:46:14 -- common/autotest_common.sh@940 -- # kill -0 136682 00:09:14.555 14:46:14 -- common/autotest_common.sh@941 -- # uname 00:09:14.555 14:46:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.555 14:46:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136682 00:09:14.555 14:46:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:14.555 14:46:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:14.555 14:46:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136682' 00:09:14.555 killing process with pid 136682 00:09:14.555 14:46:14 -- common/autotest_common.sh@955 -- # kill 136682 00:09:14.555 14:46:14 -- common/autotest_common.sh@960 -- # wait 136682 00:09:14.555 [2024-04-26 14:46:14.522134] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:17.838 14:46:17 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:17.838 00:09:17.838 real 0m15.494s 
00:09:17.838 user 0m58.044s 00:09:17.838 sys 0m1.351s 00:09:17.838 14:46:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:17.838 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:17.838 ************************************ 00:09:17.838 END TEST nvmf_filesystem_no_in_capsule 00:09:17.838 ************************************ 00:09:17.838 14:46:17 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:17.838 14:46:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:17.838 14:46:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.838 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:17.838 ************************************ 00:09:17.838 START TEST nvmf_filesystem_in_capsule 00:09:17.838 ************************************ 00:09:17.838 14:46:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:09:17.838 14:46:17 -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:17.838 14:46:17 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:17.838 14:46:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:17.838 14:46:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:17.838 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:17.838 14:46:17 -- nvmf/common.sh@470 -- # nvmfpid=138750 00:09:17.838 14:46:17 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.838 14:46:17 -- nvmf/common.sh@471 -- # waitforlisten 138750 00:09:17.838 14:46:17 -- common/autotest_common.sh@817 -- # '[' -z 138750 ']' 00:09:17.838 14:46:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.838 14:46:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:17.838 14:46:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
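Each of the three filesystem sub-tests above goes through `make_filesystem`, whose trace shows the force-flag selection: ext4's mkfs takes `-F`, while btrfs and xfs take `-f`. A small sketch of just that branch (function name and behavior inferred from the autotest_common.sh trace lines; the real helper also retries mkfs a few times):

```shell
#!/bin/sh
# Sketch of the force-flag choice traced in make_filesystem (autotest_common.sh):
# '[' ext4 = ext4 ']' -> force=-F, otherwise force=-f.
force_flag() {
    fstype=$1
    if [ "$fstype" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}
# The helper then runs, roughly: mkfs.$fstype $(force_flag "$fstype") "$dev_name"
echo "ext4:$(force_flag ext4) btrfs:$(force_flag btrfs) xfs:$(force_flag xfs)"
```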
00:09:17.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.838 14:46:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:17.838 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:17.838 [2024-04-26 14:46:17.537776] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:17.838 [2024-04-26 14:46:17.537911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.838 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.838 [2024-04-26 14:46:17.670859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.096 [2024-04-26 14:46:17.928097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.096 [2024-04-26 14:46:17.928189] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.096 [2024-04-26 14:46:17.928218] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.096 [2024-04-26 14:46:17.928242] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.096 [2024-04-26 14:46:17.928262] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
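The `waitforlisten` helper above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", `max_retries=100`) polls until the target's RPC socket appears. A generic, simplified sketch of that bounded-retry pattern (a plain file stands in for the socket; the real helper also verifies the pid is still alive and probes the socket with an RPC):

```shell
#!/bin/sh
# Simplified sketch of the waitforlisten pattern: poll for a path with
# bounded retries, returning 0 once it exists and 1 on timeout.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0
        i=$((i + 1))
        sleep 0.1    # the real helper sleeps between RPC probes too
    done
    return 1
}
tmp=$(mktemp)                 # stand-in for /var/tmp/spdk.sock
wait_for_path "$tmp" 5 && echo listening
rm -f "$tmp"
```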
00:09:18.096 [2024-04-26 14:46:17.928365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.096 [2024-04-26 14:46:17.928440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.096 [2024-04-26 14:46:17.928478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.096 [2024-04-26 14:46:17.928485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.662 14:46:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:18.662 14:46:18 -- common/autotest_common.sh@850 -- # return 0 00:09:18.662 14:46:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:18.662 14:46:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:18.662 14:46:18 -- common/autotest_common.sh@10 -- # set +x 00:09:18.662 14:46:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.662 14:46:18 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:18.662 14:46:18 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:18.662 14:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.662 14:46:18 -- common/autotest_common.sh@10 -- # set +x 00:09:18.662 [2024-04-26 14:46:18.535701] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f70d781b940) succeed. 00:09:18.662 [2024-04-26 14:46:18.547349] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f70d77d7940) succeed. 
00:09:18.919 14:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.919 14:46:18 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:18.919 14:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.919 14:46:18 -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 Malloc1 00:09:19.485 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.485 14:46:19 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.485 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.485 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.485 14:46:19 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.485 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.485 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.485 14:46:19 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.485 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.485 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 [2024-04-26 14:46:19.412793] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.485 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.485 14:46:19 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:19.485 14:46:19 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:09:19.485 14:46:19 -- common/autotest_common.sh@1365 -- # local bdev_info 00:09:19.485 14:46:19 -- common/autotest_common.sh@1366 -- # local bs 00:09:19.485 14:46:19 -- common/autotest_common.sh@1367 -- # local nb 00:09:19.485 
14:46:19 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:19.485 14:46:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:19.485 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 14:46:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:19.485 14:46:19 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:09:19.485 { 00:09:19.485 "name": "Malloc1", 00:09:19.485 "aliases": [ 00:09:19.485 "46692dc0-922d-4bef-94e2-c80d2ae940a2" 00:09:19.485 ], 00:09:19.485 "product_name": "Malloc disk", 00:09:19.485 "block_size": 512, 00:09:19.485 "num_blocks": 1048576, 00:09:19.485 "uuid": "46692dc0-922d-4bef-94e2-c80d2ae940a2", 00:09:19.485 "assigned_rate_limits": { 00:09:19.485 "rw_ios_per_sec": 0, 00:09:19.485 "rw_mbytes_per_sec": 0, 00:09:19.485 "r_mbytes_per_sec": 0, 00:09:19.485 "w_mbytes_per_sec": 0 00:09:19.485 }, 00:09:19.485 "claimed": true, 00:09:19.485 "claim_type": "exclusive_write", 00:09:19.485 "zoned": false, 00:09:19.485 "supported_io_types": { 00:09:19.485 "read": true, 00:09:19.485 "write": true, 00:09:19.485 "unmap": true, 00:09:19.485 "write_zeroes": true, 00:09:19.485 "flush": true, 00:09:19.485 "reset": true, 00:09:19.485 "compare": false, 00:09:19.485 "compare_and_write": false, 00:09:19.485 "abort": true, 00:09:19.485 "nvme_admin": false, 00:09:19.485 "nvme_io": false 00:09:19.485 }, 00:09:19.485 "memory_domains": [ 00:09:19.485 { 00:09:19.485 "dma_device_id": "system", 00:09:19.485 "dma_device_type": 1 00:09:19.485 }, 00:09:19.485 { 00:09:19.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.485 "dma_device_type": 2 00:09:19.485 } 00:09:19.485 ], 00:09:19.485 "driver_specific": {} 00:09:19.485 } 00:09:19.485 ]' 00:09:19.485 14:46:19 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:09:19.485 14:46:19 -- common/autotest_common.sh@1369 -- # bs=512 00:09:19.485 14:46:19 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:09:19.485 14:46:19 -- 
common/autotest_common.sh@1370 -- # nb=1048576 00:09:19.485 14:46:19 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:09:19.485 14:46:19 -- common/autotest_common.sh@1374 -- # echo 512 00:09:19.485 14:46:19 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:19.485 14:46:19 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:23.666 14:46:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.666 14:46:22 -- common/autotest_common.sh@1184 -- # local i=0 00:09:23.666 14:46:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.666 14:46:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:23.666 14:46:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:25.035 14:46:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:25.035 14:46:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:25.035 14:46:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.035 14:46:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:25.035 14:46:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.035 14:46:24 -- common/autotest_common.sh@1194 -- # return 0 00:09:25.035 14:46:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:25.035 14:46:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:25.035 14:46:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:25.035 14:46:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:25.035 14:46:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:25.035 14:46:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:25.035 14:46:24 -- setup/common.sh@80 -- # echo 536870912 00:09:25.035 14:46:24 -- 
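The `lsblk -l -o NAME,SERIAL | grep -oP` step above maps the subsystem serial `SPDKISFASTANDAWESOME` back to the kernel block device name. A minimal standalone sketch of that lookup, with the same PCRE lookahead as `target/filesystem.sh` but run against a canned sample (the sample `lsblk` output below is illustrative, not taken from this run):

```shell
# Sample of what `lsblk -l -o NAME,SERIAL` prints; only the SPDK namespace
# row carries the target serial.
sample_lsblk_output='NAME    SERIAL
nvme0n1 SPDKISFASTANDAWESOME
sda     S3Z8NB0K123456'

# `grep -oP` with a lookahead keeps just the word preceding the serial,
# i.e. the device name column.
nvme_name=$(printf '%s\n' "$sample_lsblk_output" \
  | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

echo "$nvme_name"
```

On a live system the input would come straight from `lsblk` instead of the canned variable.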
target/filesystem.sh@64 -- # nvme_size=536870912 00:09:25.035 14:46:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:25.035 14:46:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:25.035 14:46:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:25.035 14:46:24 -- target/filesystem.sh@69 -- # partprobe 00:09:25.292 14:46:25 -- target/filesystem.sh@70 -- # sleep 1 00:09:26.223 14:46:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:26.223 14:46:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:26.223 14:46:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:26.223 14:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.223 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:09:26.481 ************************************ 00:09:26.481 START TEST filesystem_in_capsule_ext4 00:09:26.481 ************************************ 00:09:26.481 14:46:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:26.481 14:46:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:26.481 14:46:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.481 14:46:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:26.481 14:46:26 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:26.481 14:46:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:26.481 14:46:26 -- common/autotest_common.sh@914 -- # local i=0 00:09:26.481 14:46:26 -- common/autotest_common.sh@915 -- # local force 00:09:26.481 14:46:26 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:26.481 14:46:26 -- common/autotest_common.sh@918 -- # force=-F 00:09:26.481 14:46:26 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:26.481 mke2fs 1.46.5 (30-Dec-2021) 00:09:26.481 Discarding device blocks: 0/522240 done 00:09:26.481 
Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:26.481 Filesystem UUID: 157982f2-50fd-4f51-8da3-1f76ecac7e52 00:09:26.481 Superblock backups stored on blocks: 00:09:26.481 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:26.481 00:09:26.481 Allocating group tables: 0/64 done 00:09:26.481 Writing inode tables: 0/64 done 00:09:26.481 Creating journal (8192 blocks): done 00:09:26.481 Writing superblocks and filesystem accounting information: 0/64 done 00:09:26.481 00:09:26.481 14:46:26 -- common/autotest_common.sh@931 -- # return 0 00:09:26.481 14:46:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:26.481 14:46:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:26.481 14:46:26 -- target/filesystem.sh@25 -- # sync 00:09:26.481 14:46:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:26.481 14:46:26 -- target/filesystem.sh@27 -- # sync 00:09:26.481 14:46:26 -- target/filesystem.sh@29 -- # i=0 00:09:26.481 14:46:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:26.481 14:46:26 -- target/filesystem.sh@37 -- # kill -0 138750 00:09:26.481 14:46:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:26.481 14:46:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:26.481 14:46:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:26.481 14:46:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:26.481 00:09:26.481 real 0m0.174s 00:09:26.481 user 0m0.014s 00:09:26.481 sys 0m0.030s 00:09:26.481 14:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.481 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:09:26.481 ************************************ 00:09:26.481 END TEST filesystem_in_capsule_ext4 00:09:26.481 ************************************ 00:09:26.481 14:46:26 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:26.481 14:46:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
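The `make_filesystem` helper traced in both the ext4 and btrfs passes picks the "force" flag per filesystem type: the `'[' ext4 = ext4 ']'` branch sets `-F`, every other type falls through to `-f`. A condensed sketch of that dispatch (`force_flag` is a hypothetical name; the real helper also runs `mkfs` and retries):

```shell
# mkfs.ext4 spells "overwrite existing data" as -F, while mkfs.btrfs and
# mkfs.xfs spell it as -f; mirror autotest_common.sh's selection logic.
force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        printf '%s\n' -F
    else
        printf '%s\n' -f
    fi
}

force_flag ext4
force_flag btrfs
```

The flag would then be passed as `mkfs.$fstype $force $dev_name` against the partition created earlier.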
00:09:26.481 14:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.481 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:09:26.739 ************************************ 00:09:26.739 START TEST filesystem_in_capsule_btrfs 00:09:26.739 ************************************ 00:09:26.739 14:46:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:26.739 14:46:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:26.739 14:46:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.739 14:46:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:26.739 14:46:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:26.739 14:46:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:26.739 14:46:26 -- common/autotest_common.sh@914 -- # local i=0 00:09:26.739 14:46:26 -- common/autotest_common.sh@915 -- # local force 00:09:26.739 14:46:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:26.739 14:46:26 -- common/autotest_common.sh@920 -- # force=-f 00:09:26.739 14:46:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:26.739 btrfs-progs v6.6.2 00:09:26.739 See https://btrfs.readthedocs.io for more information. 00:09:26.739 00:09:26.739 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:26.739 NOTE: several default settings have changed in version 5.15, please make sure 00:09:26.739 this does not affect your deployments: 00:09:26.739 - DUP for metadata (-m dup) 00:09:26.739 - enabled no-holes (-O no-holes) 00:09:26.739 - enabled free-space-tree (-R free-space-tree) 00:09:26.739 00:09:26.739 Label: (null) 00:09:26.739 UUID: 9e13e8eb-7951-4d59-a5c0-b7c9debec81c 00:09:26.739 Node size: 16384 00:09:26.739 Sector size: 4096 00:09:26.739 Filesystem size: 510.00MiB 00:09:26.739 Block group profiles: 00:09:26.739 Data: single 8.00MiB 00:09:26.739 Metadata: DUP 32.00MiB 00:09:26.739 System: DUP 8.00MiB 00:09:26.739 SSD detected: yes 00:09:26.739 Zoned device: no 00:09:26.739 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:26.739 Runtime features: free-space-tree 00:09:26.739 Checksum: crc32c 00:09:26.739 Number of devices: 1 00:09:26.739 Devices: 00:09:26.739 ID SIZE PATH 00:09:26.739 1 510.00MiB /dev/nvme0n1p1 00:09:26.739 00:09:26.739 14:46:26 -- common/autotest_common.sh@931 -- # return 0 00:09:26.739 14:46:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:26.739 14:46:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:26.739 14:46:26 -- target/filesystem.sh@25 -- # sync 00:09:26.739 14:46:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:26.739 14:46:26 -- target/filesystem.sh@27 -- # sync 00:09:26.739 14:46:26 -- target/filesystem.sh@29 -- # i=0 00:09:26.739 14:46:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:26.739 14:46:26 -- target/filesystem.sh@37 -- # kill -0 138750 00:09:26.739 14:46:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:26.739 14:46:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:26.739 14:46:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:26.739 14:46:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:26.739 00:09:26.739 real 0m0.162s 00:09:26.739 user 0m0.011s 00:09:26.739 sys 0m0.034s 00:09:26.739 
14:46:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.739 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:09:26.739 ************************************ 00:09:26.739 END TEST filesystem_in_capsule_btrfs 00:09:26.739 ************************************ 00:09:26.739 14:46:26 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:26.740 14:46:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:26.740 14:46:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.740 14:46:26 -- common/autotest_common.sh@10 -- # set +x 00:09:26.998 ************************************ 00:09:26.998 START TEST filesystem_in_capsule_xfs 00:09:26.998 ************************************ 00:09:26.998 14:46:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:09:26.998 14:46:26 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:26.998 14:46:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.998 14:46:26 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:26.998 14:46:26 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:26.998 14:46:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:26.998 14:46:26 -- common/autotest_common.sh@914 -- # local i=0 00:09:26.998 14:46:26 -- common/autotest_common.sh@915 -- # local force 00:09:26.998 14:46:26 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:26.998 14:46:26 -- common/autotest_common.sh@920 -- # force=-f 00:09:26.998 14:46:26 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:26.998 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:26.998 = sectsz=512 attr=2, projid32bit=1 00:09:26.998 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:26.998 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:26.998 data = bsize=4096 blocks=130560, imaxpct=25 00:09:26.998 = sunit=0 swidth=0 blks 00:09:26.998 naming =version 2 
bsize=4096 ascii-ci=0, ftype=1 00:09:26.998 log =internal log bsize=4096 blocks=16384, version=2 00:09:26.998 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:26.998 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:26.998 Discarding blocks...Done. 00:09:26.998 14:46:27 -- common/autotest_common.sh@931 -- # return 0 00:09:26.998 14:46:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:26.998 14:46:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:26.998 14:46:27 -- target/filesystem.sh@25 -- # sync 00:09:26.998 14:46:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:26.998 14:46:27 -- target/filesystem.sh@27 -- # sync 00:09:26.998 14:46:27 -- target/filesystem.sh@29 -- # i=0 00:09:26.998 14:46:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:26.998 14:46:27 -- target/filesystem.sh@37 -- # kill -0 138750 00:09:26.998 14:46:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:26.998 14:46:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:26.998 14:46:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:26.999 14:46:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:26.999 00:09:26.999 real 0m0.194s 00:09:26.999 user 0m0.007s 00:09:26.999 sys 0m0.032s 00:09:26.999 14:46:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.999 14:46:27 -- common/autotest_common.sh@10 -- # set +x 00:09:26.999 ************************************ 00:09:26.999 END TEST filesystem_in_capsule_xfs 00:09:26.999 ************************************ 00:09:26.999 14:46:27 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:27.256 14:46:27 -- target/filesystem.sh@93 -- # sync 00:09:27.256 14:46:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.785 14:46:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.785 
14:46:29 -- common/autotest_common.sh@1205 -- # local i=0 00:09:29.785 14:46:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:29.785 14:46:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.785 14:46:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:29.785 14:46:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.785 14:46:29 -- common/autotest_common.sh@1217 -- # return 0 00:09:29.785 14:46:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.785 14:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.785 14:46:29 -- common/autotest_common.sh@10 -- # set +x 00:09:29.785 14:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.785 14:46:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:29.785 14:46:29 -- target/filesystem.sh@101 -- # killprocess 138750 00:09:29.785 14:46:29 -- common/autotest_common.sh@936 -- # '[' -z 138750 ']' 00:09:29.785 14:46:29 -- common/autotest_common.sh@940 -- # kill -0 138750 00:09:29.785 14:46:29 -- common/autotest_common.sh@941 -- # uname 00:09:29.785 14:46:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:29.785 14:46:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138750 00:09:29.785 14:46:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:29.785 14:46:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:29.785 14:46:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138750' 00:09:29.785 killing process with pid 138750 00:09:29.785 14:46:29 -- common/autotest_common.sh@955 -- # kill 138750 00:09:29.785 14:46:29 -- common/autotest_common.sh@960 -- # wait 138750 00:09:30.044 [2024-04-26 14:46:29.985587] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:33.335 14:46:32 -- target/filesystem.sh@102 -- # 
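Both `waitforserial` (after `nvme connect`) and `waitforserial_disconnect` above poll with the same shape: retry up to 15 times, one second apart, until `lsblk | grep` reports the expected state. A stripped-down version of that retry loop, with the probe stubbed out so it can run anywhere (`wait_for_ready` and `fake_probe` are illustrative names, not SPDK helpers):

```shell
# Retry a probe command up to 15 times, one second apart, until it succeeds;
# return 1 on timeout. The (( i++ <= 15 )) guard matches the traced loop.
wait_for_ready() {
    local probe=$1 i=0
    while (( i++ <= 15 )); do
        "$probe" && return 0
        sleep 1
    done
    return 1
}

# Stub probe that only "sees" the device on the third attempt.
attempts=0
fake_probe() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

wait_for_ready fake_probe && echo connected
```

In the real helper the probe is `lsblk -l -o NAME,SERIAL | grep -c "$serial"` compared against the expected namespace count.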
nvmfpid= 00:09:33.335 00:09:33.335 real 0m15.345s 00:09:33.335 user 0m56.858s 00:09:33.335 sys 0m1.307s 00:09:33.335 14:46:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.335 14:46:32 -- common/autotest_common.sh@10 -- # set +x 00:09:33.336 ************************************ 00:09:33.336 END TEST nvmf_filesystem_in_capsule 00:09:33.336 ************************************ 00:09:33.336 14:46:32 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:33.336 14:46:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:33.336 14:46:32 -- nvmf/common.sh@117 -- # sync 00:09:33.336 14:46:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:33.336 14:46:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:33.336 14:46:32 -- nvmf/common.sh@120 -- # set +e 00:09:33.336 14:46:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.336 14:46:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:33.336 rmmod nvme_rdma 00:09:33.336 rmmod nvme_fabrics 00:09:33.336 14:46:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.336 14:46:32 -- nvmf/common.sh@124 -- # set -e 00:09:33.336 14:46:32 -- nvmf/common.sh@125 -- # return 0 00:09:33.336 14:46:32 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:09:33.336 14:46:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:33.336 14:46:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:33.336 00:09:33.336 real 0m33.475s 00:09:33.336 user 1m55.944s 00:09:33.336 sys 0m4.314s 00:09:33.336 14:46:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.336 14:46:32 -- common/autotest_common.sh@10 -- # set +x 00:09:33.336 ************************************ 00:09:33.336 END TEST nvmf_filesystem 00:09:33.336 ************************************ 00:09:33.336 14:46:32 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:33.336 14:46:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:33.336 
14:46:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.336 14:46:32 -- common/autotest_common.sh@10 -- # set +x 00:09:33.336 ************************************ 00:09:33.336 START TEST nvmf_discovery 00:09:33.336 ************************************ 00:09:33.336 14:46:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:33.336 * Looking for test storage... 00:09:33.336 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:33.336 14:46:33 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.336 14:46:33 -- nvmf/common.sh@7 -- # uname -s 00:09:33.336 14:46:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.336 14:46:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.336 14:46:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.336 14:46:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.336 14:46:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.336 14:46:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.336 14:46:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.336 14:46:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.336 14:46:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.336 14:46:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.336 14:46:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:33.336 14:46:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:33.336 14:46:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.336 14:46:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.336 14:46:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.336 14:46:33 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.336 14:46:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.336 14:46:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.336 14:46:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.336 14:46:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.336 14:46:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.336 14:46:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.336 14:46:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.336 14:46:33 -- paths/export.sh@5 -- # export PATH 00:09:33.336 14:46:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.336 14:46:33 -- nvmf/common.sh@47 -- # : 0 00:09:33.336 14:46:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.336 14:46:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.336 14:46:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.336 14:46:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.336 14:46:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.336 14:46:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.336 14:46:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.336 14:46:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.336 14:46:33 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:33.336 14:46:33 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:33.336 14:46:33 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:33.336 14:46:33 -- target/discovery.sh@15 -- # hash nvme 00:09:33.336 14:46:33 -- target/discovery.sh@20 -- # nvmftestinit 00:09:33.336 14:46:33 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:33.336 14:46:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.336 14:46:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:33.336 14:46:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:33.336 14:46:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:33.336 14:46:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.336 14:46:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.336 14:46:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.336 14:46:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:33.336 14:46:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:33.336 14:46:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.336 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:09:35.241 14:46:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:35.241 14:46:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.241 14:46:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.241 14:46:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.241 14:46:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.241 14:46:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.241 14:46:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.241 14:46:34 -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.241 14:46:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.241 14:46:34 -- nvmf/common.sh@296 -- # e810=() 00:09:35.241 14:46:34 -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.241 14:46:34 -- nvmf/common.sh@297 -- # x722=() 00:09:35.241 14:46:34 -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.241 14:46:34 -- nvmf/common.sh@298 -- # mlx=() 00:09:35.241 
14:46:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.241 14:46:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.241 14:46:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.241 14:46:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:35.241 14:46:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:35.241 14:46:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:35.241 14:46:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.241 14:46:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.241 14:46:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:35.241 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:35.241 14:46:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == 
\0\x\1\0\1\7 ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.241 14:46:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.241 14:46:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:35.241 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:35.241 14:46:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.241 14:46:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.241 14:46:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:35.241 14:46:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.242 14:46:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.242 14:46:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.242 14:46:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.242 14:46:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:35.242 Found net devices under 0000:09:00.0: mlx_0_0 00:09:35.242 14:46:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.242 14:46:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.242 14:46:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.242 14:46:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:35.242 14:46:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.242 14:46:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:35.242 Found net devices under 0000:09:00.1: mlx_0_1 00:09:35.242 14:46:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.242 14:46:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:35.242 14:46:34 -- 
nvmf/common.sh@403 -- # is_hw=yes
00:09:35.242 14:46:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:09:35.242 14:46:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]]
00:09:35.242 14:46:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]]
00:09:35.242 14:46:34 -- nvmf/common.sh@409 -- # rdma_device_init
00:09:35.242 14:46:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules
00:09:35.242 14:46:34 -- nvmf/common.sh@58 -- # uname
00:09:35.242 14:46:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:09:35.242 14:46:34 -- nvmf/common.sh@62 -- # modprobe ib_cm
00:09:35.242 14:46:34 -- nvmf/common.sh@63 -- # modprobe ib_core
00:09:35.242 14:46:34 -- nvmf/common.sh@64 -- # modprobe ib_umad
00:09:35.242 14:46:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:09:35.242 14:46:34 -- nvmf/common.sh@66 -- # modprobe iw_cm
00:09:35.242 14:46:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:09:35.242 14:46:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:09:35.242 14:46:34 -- nvmf/common.sh@491 -- # allocate_nic_ips
00:09:35.242 14:46:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:09:35.242 14:46:34 -- nvmf/common.sh@73 -- # get_rdma_if_list
00:09:35.242 14:46:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:35.242 14:46:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:09:35.242 14:46:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:09:35.242 14:46:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:35.242 14:46:35 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:09:35.242 14:46:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@105 -- # continue 2
00:09:35.242 14:46:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@105 -- # continue 2
00:09:35.242 14:46:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:09:35.242 14:46:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:09:35.242 14:46:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:09:35.242 14:46:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:09:35.242 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:35.242 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff
00:09:35.242 altname enp9s0f0np0
00:09:35.242 inet 192.168.100.8/24 scope global mlx_0_0
00:09:35.242 valid_lft forever preferred_lft forever
00:09:35.242 14:46:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:09:35.242 14:46:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:09:35.242 14:46:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:09:35.242 14:46:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:09:35.242 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:35.242 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff
00:09:35.242 altname enp9s0f1np1
00:09:35.242 inet 192.168.100.9/24 scope global mlx_0_1
00:09:35.242 valid_lft forever preferred_lft forever
00:09:35.242 14:46:35 -- nvmf/common.sh@411 -- # return 0
00:09:35.242 14:46:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:09:35.242 14:46:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:35.242 14:46:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:09:35.242 14:46:35 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:09:35.242 14:46:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:35.242 14:46:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:09:35.242 14:46:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:09:35.242 14:46:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:35.242 14:46:35 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:09:35.242 14:46:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@105 -- # continue 2
00:09:35.242 14:46:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:35.242 14:46:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:09:35.242 14:46:35 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@105 -- # continue 2
00:09:35.242 14:46:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:09:35.242 14:46:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:09:35.242 14:46:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:09:35.242 14:46:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:09:35.242 14:46:35 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:09:35.242 14:46:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:09:35.242 192.168.100.9'
00:09:35.242 14:46:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:09:35.242 192.168.100.9'
00:09:35.242 14:46:35 -- nvmf/common.sh@446 -- # head -n 1
00:09:35.242 14:46:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:09:35.242 14:46:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:09:35.242 192.168.100.9'
00:09:35.242 14:46:35 -- nvmf/common.sh@447 -- # tail -n +2
00:09:35.242 14:46:35 -- nvmf/common.sh@447 -- # head -n 1
00:09:35.242 14:46:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:09:35.242 14:46:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:09:35.242 14:46:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:09:35.242 14:46:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:09:35.242 14:46:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:09:35.242 14:46:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:09:35.242 14:46:35 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:09:35.242 14:46:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:09:35.242 14:46:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:09:35.242 14:46:35 -- common/autotest_common.sh@10 -- # set +x
00:09:35.242 14:46:35 -- nvmf/common.sh@470 -- # nvmfpid=142440
00:09:35.242 14:46:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:35.242 14:46:35 -- nvmf/common.sh@471 -- # waitforlisten 142440
00:09:35.242 14:46:35 -- common/autotest_common.sh@817 -- # '[' -z 142440 ']'
00:09:35.242 14:46:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:35.242 14:46:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:35.242 14:46:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:35.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:46:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:09:35.242 14:46:35 -- common/autotest_common.sh@10 -- # set +x
00:09:35.242 [2024-04-26 14:46:35.178021] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:09:35.242 [2024-04-26 14:46:35.178184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:35.242 EAL: No free 2048 kB hugepages reported on node 1
00:09:35.242 [2024-04-26 14:46:35.301395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:35.500 [2024-04-26 14:46:35.553916] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:35.500 [2024-04-26 14:46:35.553993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:35.500 [2024-04-26 14:46:35.554021] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:35.500 [2024-04-26 14:46:35.554043] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:35.500 [2024-04-26 14:46:35.554062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:35.500 [2024-04-26 14:46:35.554214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:35.500 [2024-04-26 14:46:35.554242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:35.500 [2024-04-26 14:46:35.554324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.500 [2024-04-26 14:46:35.554329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:36.066 14:46:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:09:36.066 14:46:36 -- common/autotest_common.sh@850 -- # return 0
00:09:36.066 14:46:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:09:36.066 14:46:36 -- common/autotest_common.sh@716 -- # xtrace_disable
00:09:36.066 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.066 14:46:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:36.066 14:46:36 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:09:36.066 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.066 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.324 [2024-04-26 14:46:36.154060] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f210f27c940) succeed.
00:09:36.324 [2024-04-26 14:46:36.165006] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f210f236940) succeed.
00:09:36.581 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.581 14:46:36 -- target/discovery.sh@26 -- # seq 1 4
00:09:36.581 14:46:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:36.581 14:46:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:09:36.581 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.581 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 Null1
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 [2024-04-26 14:46:36.514881] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:36.582 14:46:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 Null2
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:36.582 14:46:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 Null3
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:09:36.582 14:46:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 Null4
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
00:09:36.582 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.582 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.582 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.582 14:46:36 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 4420
00:09:36.840
00:09:36.840 Discovery Log Number of Records 6, Generation counter 6
00:09:36.840 =====Discovery Log Entry 0======
00:09:36.840 trtype: rdma
00:09:36.840 adrfam: ipv4
00:09:36.840 subtype: current discovery subsystem
00:09:36.840 treq: not required
00:09:36.840 portid: 0
00:09:36.840 trsvcid: 4420
00:09:36.840 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:36.840 traddr: 192.168.100.8
00:09:36.840 eflags: explicit discovery connections, duplicate discovery information
00:09:36.840 rdma_prtype: not specified
00:09:36.840 rdma_qptype: connected
00:09:36.840 rdma_cms: rdma-cm
00:09:36.840 rdma_pkey: 0x0000
00:09:36.840 =====Discovery Log Entry 1======
00:09:36.840 trtype: rdma
00:09:36.840 adrfam: ipv4
00:09:36.840 subtype: nvme subsystem
00:09:36.840 treq: not required
00:09:36.840 portid: 0
00:09:36.840 trsvcid: 4420
00:09:36.840 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:36.840 traddr: 192.168.100.8
00:09:36.840 eflags: none
00:09:36.840 rdma_prtype: not specified
00:09:36.840 rdma_qptype: connected
00:09:36.840 rdma_cms: rdma-cm
00:09:36.840 rdma_pkey: 0x0000
00:09:36.840 =====Discovery Log Entry 2======
00:09:36.840 trtype: rdma
00:09:36.840 adrfam: ipv4
00:09:36.841 subtype: nvme subsystem
00:09:36.841 treq: not required
00:09:36.841 portid: 0
00:09:36.841 trsvcid: 4420
00:09:36.841 subnqn: nqn.2016-06.io.spdk:cnode2
00:09:36.841 traddr: 192.168.100.8
00:09:36.841 eflags: none
00:09:36.841 rdma_prtype: not specified
00:09:36.841 rdma_qptype: connected
00:09:36.841 rdma_cms: rdma-cm
00:09:36.841 rdma_pkey: 0x0000
00:09:36.841 =====Discovery Log Entry 3======
00:09:36.841 trtype: rdma
00:09:36.841 adrfam: ipv4
00:09:36.841 subtype: nvme subsystem
00:09:36.841 treq: not required
00:09:36.841 portid: 0
00:09:36.841 trsvcid: 4420
00:09:36.841 subnqn: nqn.2016-06.io.spdk:cnode3
00:09:36.841 traddr: 192.168.100.8
00:09:36.841 eflags: none
00:09:36.841 rdma_prtype: not specified
00:09:36.841 rdma_qptype: connected
00:09:36.841 rdma_cms: rdma-cm
00:09:36.841 rdma_pkey: 0x0000
00:09:36.841 =====Discovery Log Entry 4======
00:09:36.841 trtype: rdma
00:09:36.841 adrfam: ipv4
00:09:36.841 subtype: nvme subsystem
00:09:36.841 treq: not required
00:09:36.841 portid: 0
00:09:36.841 trsvcid: 4420
00:09:36.841 subnqn: nqn.2016-06.io.spdk:cnode4
00:09:36.841 traddr: 192.168.100.8
00:09:36.841 eflags: none
00:09:36.841 rdma_prtype: not specified
00:09:36.841 rdma_qptype: connected
00:09:36.841 rdma_cms: rdma-cm
00:09:36.841 rdma_pkey: 0x0000
00:09:36.841 =====Discovery Log Entry 5======
00:09:36.841 trtype: rdma
00:09:36.841 adrfam: ipv4
00:09:36.841 subtype: discovery subsystem referral
00:09:36.841 treq: not required
00:09:36.841 portid: 0
00:09:36.841 trsvcid: 4430
00:09:36.841 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:36.841 traddr: 192.168.100.8
00:09:36.841 eflags: none
00:09:36.841 rdma_prtype: unrecognized
00:09:36.841 rdma_qptype: unrecognized
00:09:36.841 rdma_cms: unrecognized
00:09:36.841 rdma_pkey: 0x0000
00:09:36.841 14:46:36 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:09:36.841 Perform nvmf subsystem discovery via RPC
00:09:36.841 14:46:36 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:09:36.841 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.841 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.841 [2024-04-26 14:46:36.699259] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:09:36.841 [
00:09:36.841 {
00:09:36.841 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:09:36.841 "subtype": "Discovery",
00:09:36.841 "listen_addresses": [
00:09:36.841 {
00:09:36.841 "transport": "RDMA",
00:09:36.841 "trtype": "RDMA",
00:09:36.841 "adrfam": "IPv4",
00:09:36.841 "traddr": "192.168.100.8",
00:09:36.841 "trsvcid": "4420"
00:09:36.841 }
00:09:36.841 ],
00:09:36.841 "allow_any_host": true,
00:09:36.841 "hosts": []
00:09:36.841 },
00:09:36.841 {
00:09:36.841 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:36.841 "subtype": "NVMe",
00:09:36.841 "listen_addresses": [
00:09:36.841 {
00:09:36.841 "transport": "RDMA",
00:09:36.841 "trtype": "RDMA",
00:09:36.841 "adrfam": "IPv4",
00:09:36.841 "traddr": "192.168.100.8",
00:09:36.841 "trsvcid": "4420"
00:09:36.841 }
00:09:36.841 ],
00:09:36.841 "allow_any_host": true,
00:09:36.841 "hosts": [],
00:09:36.841 "serial_number": "SPDK00000000000001",
00:09:36.841 "model_number": "SPDK bdev Controller",
00:09:36.841 "max_namespaces": 32,
00:09:36.841 "min_cntlid": 1,
00:09:36.841 "max_cntlid": 65519,
00:09:36.841 "namespaces": [
00:09:36.841 {
00:09:36.841 "nsid": 1,
00:09:36.841 "bdev_name": "Null1",
00:09:36.841 "name": "Null1",
00:09:36.841 "nguid": "9DC374AAC18242AA8E25F98CCC5DF29A",
00:09:36.841 "uuid": "9dc374aa-c182-42aa-8e25-f98ccc5df29a"
00:09:36.841 }
00:09:36.841 ]
00:09:36.841 },
00:09:36.841 {
00:09:36.841 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:36.841 "subtype": "NVMe",
00:09:36.841 "listen_addresses": [
00:09:36.841 {
00:09:36.841 "transport": "RDMA",
00:09:36.841 "trtype": "RDMA",
00:09:36.841 "adrfam": "IPv4",
00:09:36.841 "traddr": "192.168.100.8",
00:09:36.841 "trsvcid": "4420"
00:09:36.841 }
00:09:36.841 ],
00:09:36.841 "allow_any_host": true,
00:09:36.841 "hosts": [],
00:09:36.841 "serial_number": "SPDK00000000000002",
00:09:36.841 "model_number": "SPDK bdev Controller",
00:09:36.841 "max_namespaces": 32,
00:09:36.841 "min_cntlid": 1,
00:09:36.841 "max_cntlid": 65519,
00:09:36.841 "namespaces": [
00:09:36.841 {
00:09:36.841 "nsid": 1,
00:09:36.841 "bdev_name": "Null2",
00:09:36.841 "name": "Null2",
00:09:36.841 "nguid": "3CB32578D2CC4F48B6E4D27C0C926640",
00:09:36.841 "uuid": "3cb32578-d2cc-4f48-b6e4-d27c0c926640"
00:09:36.841 }
00:09:36.841 ]
00:09:36.841 },
00:09:36.841 {
00:09:36.841 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:09:36.841 "subtype": "NVMe",
00:09:36.841 "listen_addresses": [
00:09:36.841 {
00:09:36.841 "transport": "RDMA",
00:09:36.841 "trtype": "RDMA",
00:09:36.841 "adrfam": "IPv4",
00:09:36.841 "traddr": "192.168.100.8",
00:09:36.841 "trsvcid": "4420"
00:09:36.841 }
00:09:36.841 ],
00:09:36.841 "allow_any_host": true,
00:09:36.841 "hosts": [],
00:09:36.841 "serial_number": "SPDK00000000000003",
00:09:36.841 "model_number": "SPDK bdev Controller",
00:09:36.841 "max_namespaces": 32,
00:09:36.841 "min_cntlid": 1,
00:09:36.841 "max_cntlid": 65519,
00:09:36.841 "namespaces": [
00:09:36.841 {
00:09:36.841 "nsid": 1,
00:09:36.841 "bdev_name": "Null3",
00:09:36.841 "name": "Null3",
00:09:36.841 "nguid": "5B006CAC6B32492E97A653C4E3B9DCD3",
00:09:36.841 "uuid": "5b006cac-6b32-492e-97a6-53c4e3b9dcd3"
00:09:36.841 }
00:09:36.841 ]
00:09:36.841 },
00:09:36.841 {
00:09:36.841 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:09:36.841 "subtype": "NVMe",
00:09:36.841 "listen_addresses": [
00:09:36.841 {
00:09:36.841 "transport": "RDMA",
00:09:36.841 "trtype": "RDMA",
00:09:36.841 "adrfam": "IPv4",
00:09:36.841 "traddr": "192.168.100.8",
00:09:36.841 "trsvcid": "4420"
00:09:36.841 }
00:09:36.841 ],
00:09:36.841 "allow_any_host": true,
00:09:36.841 "hosts": [],
00:09:36.841 "serial_number": "SPDK00000000000004",
00:09:36.841 "model_number": "SPDK bdev Controller",
00:09:36.841 "max_namespaces": 32,
00:09:36.841 "min_cntlid": 1,
00:09:36.841 "max_cntlid": 65519,
00:09:36.841 "namespaces": [
00:09:36.841 {
00:09:36.841 "nsid": 1,
00:09:36.841 "bdev_name": "Null4",
00:09:36.841 "name": "Null4",
00:09:36.841 "nguid": "E86CB079B3234A919A6F6C19244C4159",
00:09:36.841 "uuid": "e86cb079-b323-4a91-9a6f-6c19244c4159"
00:09:36.841 }
00:09:36.841 ]
00:09:36.841 }
00:09:36.841 ]
00:09:36.841 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.841 14:46:36 -- target/discovery.sh@42 -- # seq 1 4
00:09:36.841 14:46:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:36.841 14:46:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:36.841 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.841 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.841 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.841 14:46:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:09:36.841 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.841 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:36.842 14:46:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
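The per-cnode setup loop (target/discovery.sh@26-@30) and the teardown loop (@42-@44) traced in this run both derive their bdev names, subsystem NQNs, and serial numbers from one `seq 1 4` counter. A standalone sketch of that naming scheme; since `rpc_cmd` needs a live SPDK target, the RPC invocations are only printed here, not executed:

```shell
# Reproduce the names discovery.sh generates for i in 1..4 and echo the
# rpc.py-style commands that the trace above shows being issued for each.
for i in $(seq 1 4); do
    echo "rpc_cmd bdev_null_create Null$i 102400 512"
    echo "rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
    echo "rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i"
done
```

The teardown loop at @42-@44 walks the same `seq 1 4` range in reverse order of concern, deleting subsystem `cnode$i` before bdev `Null$i`, which is why Null1 through Null4 disappear pairwise in the trace.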
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:36.842 14:46:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:36.842 14:46:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:09:36.842 14:46:36 -- target/discovery.sh@49 -- # jq -r '.[].name'
00:09:36.842 14:46:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:36.842 14:46:36 -- common/autotest_common.sh@10 -- # set +x
00:09:36.842 14:46:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:36.842 14:46:36 -- target/discovery.sh@49 -- # check_bdevs=
00:09:36.842 14:46:36 -- target/discovery.sh@50 -- # '[' -n '' ']'
00:09:36.842 14:46:36 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:09:36.842 14:46:36 -- target/discovery.sh@57 -- # nvmftestfini
00:09:36.842 14:46:36 -- nvmf/common.sh@477 -- # nvmfcleanup
00:09:36.842 14:46:36 -- nvmf/common.sh@117 -- # sync
00:09:36.842 14:46:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:36.842 14:46:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:36.842 14:46:36 -- nvmf/common.sh@120 -- # set +e
00:09:36.842 14:46:36 -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:36.842 14:46:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:09:36.842 rmmod nvme_rdma
00:09:36.842 rmmod nvme_fabrics
00:09:36.842 14:46:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:36.842 14:46:36 -- nvmf/common.sh@124 -- # set -e
00:09:36.842 14:46:36 -- nvmf/common.sh@125 -- # return 0
00:09:36.842 14:46:36 -- nvmf/common.sh@478 -- # '[' -n 142440 ']'
00:09:36.842 14:46:36 -- nvmf/common.sh@479 -- # killprocess 142440
00:09:36.842 14:46:36 -- common/autotest_common.sh@936 -- # '[' -z 142440 ']'
00:09:36.842 14:46:36 -- common/autotest_common.sh@940 -- # kill -0 142440
00:09:36.842 14:46:36 -- common/autotest_common.sh@941 -- # uname
00:09:36.842 14:46:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:36.842 14:46:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142440
00:09:36.842 14:46:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:36.842 14:46:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
14:46:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142440'
00:09:36.842 killing process with pid 142440
00:09:36.842 14:46:36 -- common/autotest_common.sh@955 -- # kill 142440
00:09:36.842 [2024-04-26 14:46:36.896353] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:09:36.842 14:46:36 -- common/autotest_common.sh@960 -- # wait 142440
00:09:37.407 [2024-04-26 14:46:37.426337] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:09:38.780 14:46:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:09:38.780 14:46:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:09:38.780
00:09:38.780 real 0m5.672s
00:09:38.780 user 0m11.128s
00:09:38.780 sys 0m1.973s
00:09:38.780 14:46:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:38.780 14:46:38 -- common/autotest_common.sh@10 -- # set +x
00:09:38.780 ************************************
00:09:38.780 END TEST nvmf_discovery
00:09:38.780 ************************************
00:09:38.780 14:46:38 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:09:38.780 14:46:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:38.780 14:46:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:38.780 14:46:38 -- common/autotest_common.sh@10 -- # set +x
00:09:38.780 ************************************
00:09:38.780 START TEST nvmf_referrals
00:09:38.780 ************************************
00:09:38.780 14:46:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:09:38.780 * Looking for test storage...
00:09:38.780 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:38.780 14:46:38 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:38.780 14:46:38 -- nvmf/common.sh@7 -- # uname -s
00:09:38.780 14:46:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:38.780 14:46:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:38.780 14:46:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:38.780 14:46:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:38.780 14:46:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:38.780 14:46:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:38.780 14:46:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:38.780 14:46:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:38.780 14:46:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:39.039 14:46:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:39.039 14:46:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:09:39.039 14:46:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:09:39.039 14:46:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:39.039 14:46:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:39.039 14:46:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:39.039 14:46:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:39.039 14:46:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:39.039 14:46:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:39.039 14:46:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:39.039 14:46:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:39.039 14:46:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:39.039 14:46:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:39.039 14:46:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:39.039 14:46:38 -- paths/export.sh@5 -- # export PATH
00:09:39.039 14:46:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:39.039 14:46:38 -- nvmf/common.sh@47 -- # : 0
00:09:39.039 14:46:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:39.039 14:46:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:39.039 14:46:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:39.039 14:46:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:39.039 14:46:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:39.039 14:46:38 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:39.039 14:46:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:39.039 14:46:38 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:39.039 14:46:38 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:09:39.039 14:46:38 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:09:39.039 14:46:38 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:09:39.039 14:46:38 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:09:39.039 14:46:38 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:09:39.039 14:46:38 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:09:39.039 14:46:38 -- target/referrals.sh@37 -- # nvmftestinit
00:09:39.039 14:46:38 -- nvmf/common.sh@430 -- # '[' -z rdma ']'
00:09:39.039 14:46:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:39.039 14:46:38 -- nvmf/common.sh@437 -- # prepare_net_devs
00:09:39.039 14:46:38 -- nvmf/common.sh@399 -- # local
-g is_hw=no 00:09:39.039 14:46:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:39.039 14:46:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.039 14:46:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.039 14:46:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.039 14:46:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:39.039 14:46:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:39.039 14:46:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.039 14:46:38 -- common/autotest_common.sh@10 -- # set +x 00:09:40.942 14:46:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:40.942 14:46:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.942 14:46:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.942 14:46:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.942 14:46:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.942 14:46:40 -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.942 14:46:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@296 -- # e810=() 00:09:40.942 14:46:40 -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.942 14:46:40 -- nvmf/common.sh@297 -- # x722=() 00:09:40.942 14:46:40 -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.942 14:46:40 -- nvmf/common.sh@298 -- # mlx=() 00:09:40.942 14:46:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.942 14:46:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.942 14:46:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:40.942 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:40.942 14:46:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.942 14:46:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:40.942 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:40.942 14:46:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:09:40.942 14:46:40 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.942 14:46:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.942 14:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.942 14:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:40.942 Found net devices under 0000:09:00.0: mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.942 14:46:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.942 14:46:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:40.942 Found net devices under 0000:09:00.1: mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.942 14:46:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:40.942 14:46:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:40.942 14:46:40 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:40.942 14:46:40 -- nvmf/common.sh@58 -- # uname 00:09:40.942 14:46:40 -- nvmf/common.sh@58 -- # '[' 
Linux '!=' Linux ']' 00:09:40.942 14:46:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:40.942 14:46:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:40.942 14:46:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:40.942 14:46:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:40.942 14:46:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:40.942 14:46:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:40.942 14:46:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:40.942 14:46:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:40.942 14:46:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:40.942 14:46:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:40.942 14:46:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.942 14:46:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.942 14:46:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@105 -- # continue 2 00:09:40.942 14:46:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@105 
-- # continue 2 00:09:40.942 14:46:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.942 14:46:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.942 14:46:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:40.942 14:46:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:40.942 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.942 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:09:40.942 altname enp9s0f0np0 00:09:40.942 inet 192.168.100.8/24 scope global mlx_0_0 00:09:40.942 valid_lft forever preferred_lft forever 00:09:40.942 14:46:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.942 14:46:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.942 14:46:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.942 14:46:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:40.942 14:46:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:40.942 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.942 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:09:40.942 altname enp9s0f1np1 00:09:40.942 inet 192.168.100.9/24 scope global mlx_0_1 00:09:40.942 valid_lft forever preferred_lft forever 00:09:40.942 14:46:40 -- nvmf/common.sh@411 -- # return 0 00:09:40.942 14:46:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:40.942 14:46:40 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:40.942 14:46:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:40.942 14:46:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:40.942 14:46:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.942 14:46:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.942 14:46:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.942 14:46:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.942 14:46:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@105 -- # continue 2 00:09:40.942 14:46:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.942 14:46:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.942 14:46:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.942 14:46:40 -- nvmf/common.sh@105 -- # continue 2 00:09:40.942 14:46:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.942 14:46:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:40.942 14:46:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.943 14:46:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.943 14:46:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.943 14:46:40 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:09:40.943 14:46:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.943 14:46:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:40.943 14:46:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.943 14:46:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.943 14:46:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.943 14:46:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.943 14:46:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:40.943 192.168.100.9' 00:09:40.943 14:46:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:40.943 192.168.100.9' 00:09:40.943 14:46:40 -- nvmf/common.sh@446 -- # head -n 1 00:09:40.943 14:46:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:40.943 14:46:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:40.943 192.168.100.9' 00:09:40.943 14:46:40 -- nvmf/common.sh@447 -- # tail -n +2 00:09:40.943 14:46:40 -- nvmf/common.sh@447 -- # head -n 1 00:09:40.943 14:46:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:40.943 14:46:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:40.943 14:46:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:40.943 14:46:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:40.943 14:46:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:40.943 14:46:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:40.943 14:46:40 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:40.943 14:46:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:40.943 14:46:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:40.943 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:09:40.943 14:46:40 -- nvmf/common.sh@470 -- # nvmfpid=144542 00:09:40.943 14:46:40 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.943 14:46:40 -- 
nvmf/common.sh@471 -- # waitforlisten 144542 00:09:40.943 14:46:40 -- common/autotest_common.sh@817 -- # '[' -z 144542 ']' 00:09:40.943 14:46:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.943 14:46:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:40.943 14:46:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.943 14:46:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:40.943 14:46:40 -- common/autotest_common.sh@10 -- # set +x 00:09:41.200 [2024-04-26 14:46:41.055594] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:41.200 [2024-04-26 14:46:41.055731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.200 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.200 [2024-04-26 14:46:41.178374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.458 [2024-04-26 14:46:41.425425] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.458 [2024-04-26 14:46:41.425503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.458 [2024-04-26 14:46:41.425531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.458 [2024-04-26 14:46:41.425554] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.458 [2024-04-26 14:46:41.425573] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
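The waitforlisten step traced above (common/autotest_common.sh@821-826) amounts to polling until the target exposes its RPC UNIX socket at /var/tmp/spdk.sock. A minimal sketch under stated assumptions: the helper name `wait_for_path` is illustrative, and the plain `-e` existence check stands in for the real helper, which also probes the socket over RPC.

```shell
# Sketch of a waitforlisten-style poll loop (simplified: the real helper also
# issues an RPC over the socket; here we only wait for the path to appear).
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -e "$path" ] && return 0  # real code would check for a socket (-S)
    sleep 0.01
    i=$((i + 1))
  done
  return 1
}
```

The retry counter mirrors the `max_retries=100` seen in the trace; the function returns nonzero if the path never appears so callers can bail out.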
00:09:41.458 [2024-04-26 14:46:41.425699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.458 [2024-04-26 14:46:41.425758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.458 [2024-04-26 14:46:41.425803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.458 [2024-04-26 14:46:41.425814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.025 14:46:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:42.025 14:46:41 -- common/autotest_common.sh@850 -- # return 0 00:09:42.025 14:46:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:42.025 14:46:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:42.025 14:46:41 -- common/autotest_common.sh@10 -- # set +x 00:09:42.025 14:46:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.025 14:46:41 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:42.025 14:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.025 14:46:41 -- common/autotest_common.sh@10 -- # set +x 00:09:42.025 [2024-04-26 14:46:42.012895] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f3a86ee9940) succeed. 00:09:42.025 [2024-04-26 14:46:42.023720] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f3a86ea4940) succeed. 
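The get_ip_address helper traced earlier (nvmf/common.sh@112-113) reduces one line of `ip -o -4 addr show` output to a bare address: field 4 is `addr/prefix`, and `cut` drops the prefix length. A sketch over a sample line modeled on the log's output, so no live interface is needed:

```shell
# Parse "addr/prefix" out of one-line `ip -o -4 addr show <dev>` output,
# as the trace's awk/cut pipeline does. Field 4 is "192.168.100.8/24".
sample='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"
```

This is why the trace runs `ip -o` (one record per line): without it, `awk '{print $4}'` would not land on the address field.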
00:09:42.283 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.283 14:46:42 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:42.283 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.283 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.283 [2024-04-26 14:46:42.329977] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:42.283 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.283 14:46:42 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:42.283 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.283 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.283 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.283 14:46:42 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:42.283 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.283 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.283 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.283 14:46:42 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:42.283 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.283 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.283 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.283 14:46:42 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.283 14:46:42 -- target/referrals.sh@48 -- # jq length 00:09:42.283 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.283 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@48 -- # (( 3 == 3 )) 
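The get_referral_ips checks that follow (target/referrals.sh@49-50) boil down to: collect the traddr of each referral, sort, and string-compare against the expected set. A minimal sketch with the three referral addresses registered above; the rpc_cmd/nvme-discover/jq plumbing is elided and the input list is hard-coded for illustration:

```shell
# Compare a (possibly unordered) list of referral addresses against the
# expected set, the way the trace's `sort` + string-equality checks do.
expected='127.0.0.2 127.0.0.3 127.0.0.4'
got=$(printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3 | sort | xargs)
[ "$got" = "$expected" ] && echo referrals-match
```

Sorting both sides makes the check order-independent, which matters because neither the RPC nor the discovery log page guarantees referral ordering.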
00:09:42.541 14:46:42 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:42.541 14:46:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:42.541 14:46:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.541 14:46:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:42.541 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.541 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 -- target/referrals.sh@21 -- # sort 00:09:42.541 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:42.541 14:46:42 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:42.541 14:46:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.541 14:46:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # sort 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:42.541 14:46:42 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:42.541 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.541 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:42.541 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.541 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:42.541 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.541 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.541 14:46:42 -- target/referrals.sh@56 -- # jq length 00:09:42.541 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.541 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.541 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.541 14:46:42 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:42.541 14:46:42 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:42.541 14:46:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.541 14:46:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.541 14:46:42 -- target/referrals.sh@26 -- # sort 00:09:42.799 14:46:42 -- target/referrals.sh@26 -- # echo 00:09:42.799 14:46:42 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:42.799 14:46:42 -- target/referrals.sh@60 -- # rpc_cmd 
nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:42.799 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.799 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:42.799 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.799 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:42.799 14:46:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:42.799 14:46:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.799 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:42.799 14:46:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:42.799 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:42.799 14:46:42 -- target/referrals.sh@21 -- # sort 00:09:42.799 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:42.799 14:46:42 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:42.799 14:46:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.799 14:46:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.799 14:46:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.799 14:46:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:09:42.799 14:46:42 -- target/referrals.sh@26 -- # sort 00:09:42.799 14:46:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:42.799 14:46:42 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:42.799 14:46:42 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:42.799 14:46:42 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:42.799 14:46:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.799 14:46:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:42.799 14:46:42 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:42.799 14:46:42 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:42.799 14:46:42 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:42.799 14:46:42 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:42.799 14:46:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.799 14:46:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:43.057 14:46:42 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:43.057 14:46:42 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:43.057 14:46:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.057 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.057 14:46:42 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:43.057 14:46:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:43.057 14:46:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.057 14:46:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:43.057 14:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.057 14:46:42 -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 14:46:42 -- target/referrals.sh@21 -- # sort 00:09:43.057 14:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.057 14:46:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:43.057 14:46:42 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:43.057 14:46:42 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:43.057 14:46:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.057 14:46:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.057 14:46:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:43.057 14:46:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.057 14:46:42 -- target/referrals.sh@26 -- # sort 00:09:43.057 14:46:43 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:43.057 14:46:43 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:43.057 14:46:43 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:43.057 14:46:43 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:43.057 14:46:43 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:43.057 
14:46:43 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:43.057 14:46:43 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:43.057 14:46:43 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:43.057 14:46:43 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:43.058 14:46:43 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:43.058 14:46:43 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:43.058 14:46:43 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:43.058 14:46:43 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:43.315 14:46:43 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:43.315 14:46:43 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:43.315 14:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.315 14:46:43 -- common/autotest_common.sh@10 -- # set +x 00:09:43.315 14:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.315 14:46:43 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:43.315 14:46:43 -- target/referrals.sh@82 -- # jq length 00:09:43.315 14:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.315 14:46:43 -- common/autotest_common.sh@10 -- # set +x 00:09:43.315 14:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.315 14:46:43 -- target/referrals.sh@82 -- # (( 0 == 0 
)) 00:09:43.315 14:46:43 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:43.315 14:46:43 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:43.315 14:46:43 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:43.315 14:46:43 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:43.316 14:46:43 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:43.316 14:46:43 -- target/referrals.sh@26 -- # sort 00:09:43.316 14:46:43 -- target/referrals.sh@26 -- # echo 00:09:43.316 14:46:43 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:43.316 14:46:43 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:43.316 14:46:43 -- target/referrals.sh@86 -- # nvmftestfini 00:09:43.316 14:46:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:43.316 14:46:43 -- nvmf/common.sh@117 -- # sync 00:09:43.316 14:46:43 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:43.316 14:46:43 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:43.316 14:46:43 -- nvmf/common.sh@120 -- # set +e 00:09:43.316 14:46:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.316 14:46:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:43.316 rmmod nvme_rdma 00:09:43.316 rmmod nvme_fabrics 00:09:43.316 14:46:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.316 14:46:43 -- nvmf/common.sh@124 -- # set -e 00:09:43.316 14:46:43 -- nvmf/common.sh@125 -- # return 0 00:09:43.316 14:46:43 -- nvmf/common.sh@478 -- # '[' -n 144542 ']' 00:09:43.316 14:46:43 -- nvmf/common.sh@479 -- # killprocess 144542 00:09:43.316 14:46:43 -- common/autotest_common.sh@936 -- # '[' -z 144542 ']' 00:09:43.316 14:46:43 -- common/autotest_common.sh@940 -- # kill -0 144542 00:09:43.316 14:46:43 -- common/autotest_common.sh@941 -- # uname 00:09:43.316 14:46:43 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.316 14:46:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144542 00:09:43.316 14:46:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:43.316 14:46:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:43.316 14:46:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144542' 00:09:43.316 killing process with pid 144542 00:09:43.316 14:46:43 -- common/autotest_common.sh@955 -- # kill 144542 00:09:43.316 14:46:43 -- common/autotest_common.sh@960 -- # wait 144542 00:09:43.882 [2024-04-26 14:46:43.927779] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:45.257 14:46:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:45.257 14:46:45 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:45.257 00:09:45.257 real 0m6.458s 00:09:45.257 user 0m14.547s 00:09:45.257 sys 0m2.115s 00:09:45.257 14:46:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:45.257 14:46:45 -- common/autotest_common.sh@10 -- # set +x 00:09:45.257 ************************************ 00:09:45.257 END TEST nvmf_referrals 00:09:45.257 ************************************ 00:09:45.257 14:46:45 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:45.257 14:46:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:45.257 14:46:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.257 14:46:45 -- common/autotest_common.sh@10 -- # set +x 00:09:45.516 ************************************ 00:09:45.516 START TEST nvmf_connect_disconnect 00:09:45.516 ************************************ 00:09:45.516 14:46:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:45.516 * Looking for test 
storage... 00:09:45.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.516 14:46:45 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.516 14:46:45 -- nvmf/common.sh@7 -- # uname -s 00:09:45.516 14:46:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.516 14:46:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.516 14:46:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.516 14:46:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.516 14:46:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.516 14:46:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.516 14:46:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.516 14:46:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.516 14:46:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.516 14:46:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.516 14:46:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:09:45.516 14:46:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:09:45.516 14:46:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.516 14:46:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.516 14:46:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.516 14:46:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.516 14:46:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.516 14:46:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.516 14:46:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.516 14:46:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.516 14:46:45 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.516 14:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.516 14:46:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.516 14:46:45 -- paths/export.sh@5 -- # export PATH 00:09:45.516 14:46:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.516 14:46:45 -- nvmf/common.sh@47 -- # : 0 00:09:45.516 14:46:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.516 14:46:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.516 14:46:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.516 14:46:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.516 14:46:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.516 14:46:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.516 14:46:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.516 14:46:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.516 14:46:45 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.516 14:46:45 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.516 14:46:45 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:45.516 14:46:45 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:45.516 14:46:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.516 14:46:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:45.516 14:46:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:45.516 14:46:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:45.516 14:46:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.516 14:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.516 14:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:45.516 14:46:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:45.516 14:46:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:45.516 14:46:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.516 14:46:45 -- common/autotest_common.sh@10 -- # set +x 00:09:47.416 14:46:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:47.416 14:46:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.416 14:46:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.416 14:46:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.416 14:46:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.416 14:46:47 -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.416 14:46:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@296 -- # e810=() 00:09:47.416 14:46:47 -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.416 14:46:47 -- nvmf/common.sh@297 -- # x722=() 00:09:47.416 14:46:47 -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.416 14:46:47 -- nvmf/common.sh@298 -- # mlx=() 00:09:47.416 14:46:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.416 14:46:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:09:47.416 14:46:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.416 14:46:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:47.416 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:47.416 14:46:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.416 14:46:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:47.416 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:47.416 14:46:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.416 14:46:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:47.416 14:46:47 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.416 14:46:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.416 14:46:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:47.416 Found net devices under 0000:09:00.0: mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.416 14:46:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.416 14:46:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:47.416 Found net devices under 0000:09:00.1: mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.416 14:46:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:47.416 14:46:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:47.416 14:46:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:47.416 14:46:47 -- nvmf/common.sh@58 -- # uname 00:09:47.416 14:46:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:47.416 14:46:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:47.416 14:46:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:47.416 14:46:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:47.416 14:46:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:47.416 14:46:47 -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:09:47.416 14:46:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:47.416 14:46:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:47.416 14:46:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:47.416 14:46:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:47.416 14:46:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:47.416 14:46:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.416 14:46:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.416 14:46:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@105 -- # continue 2 00:09:47.416 14:46:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@105 -- # continue 2 00:09:47.416 14:46:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.416 14:46:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.416 14:46:47 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:47.416 14:46:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:47.416 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.416 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:09:47.416 altname enp9s0f0np0 00:09:47.416 inet 192.168.100.8/24 scope global mlx_0_0 00:09:47.416 valid_lft forever preferred_lft forever 00:09:47.416 14:46:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.416 14:46:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.416 14:46:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:47.416 14:46:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:47.416 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.416 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:09:47.416 altname enp9s0f1np1 00:09:47.416 inet 192.168.100.9/24 scope global mlx_0_1 00:09:47.416 valid_lft forever preferred_lft forever 00:09:47.416 14:46:47 -- nvmf/common.sh@411 -- # return 0 00:09:47.416 14:46:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:47.416 14:46:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:47.416 14:46:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:47.416 14:46:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:47.416 14:46:47 -- nvmf/common.sh@92 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.416 14:46:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.416 14:46:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.416 14:46:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.416 14:46:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@105 -- # continue 2 00:09:47.416 14:46:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.416 14:46:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.416 14:46:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@105 -- # continue 2 00:09:47.416 14:46:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.416 14:46:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.416 14:46:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.416 14:46:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_1 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.416 14:46:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.416 14:46:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:47.416 192.168.100.9' 00:09:47.674 14:46:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:47.674 192.168.100.9' 00:09:47.674 14:46:47 -- nvmf/common.sh@446 -- # head -n 1 00:09:47.674 14:46:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:47.674 14:46:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:47.674 192.168.100.9' 00:09:47.674 14:46:47 -- nvmf/common.sh@447 -- # tail -n +2 00:09:47.674 14:46:47 -- nvmf/common.sh@447 -- # head -n 1 00:09:47.674 14:46:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:47.674 14:46:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:47.674 14:46:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:47.674 14:46:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:47.674 14:46:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:47.674 14:46:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:47.674 14:46:47 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:47.674 14:46:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:47.674 14:46:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:47.674 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:09:47.674 14:46:47 -- nvmf/common.sh@470 -- # nvmfpid=146840 00:09:47.674 14:46:47 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.674 14:46:47 -- nvmf/common.sh@471 -- # waitforlisten 146840 00:09:47.674 14:46:47 -- common/autotest_common.sh@817 -- # '[' -z 146840 ']' 00:09:47.674 14:46:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.674 14:46:47 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:09:47.674 14:46:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.674 14:46:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:47.674 14:46:47 -- common/autotest_common.sh@10 -- # set +x 00:09:47.674 [2024-04-26 14:46:47.601285] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:47.674 [2024-04-26 14:46:47.601428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.674 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.674 [2024-04-26 14:46:47.720169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.986 [2024-04-26 14:46:47.965893] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.986 [2024-04-26 14:46:47.965975] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.986 [2024-04-26 14:46:47.966003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.986 [2024-04-26 14:46:47.966027] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.986 [2024-04-26 14:46:47.966047] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:47.986 [2024-04-26 14:46:47.966183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.986 [2024-04-26 14:46:47.966246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.986 [2024-04-26 14:46:47.966292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.986 [2024-04-26 14:46:47.966299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.557 14:46:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.557 14:46:48 -- common/autotest_common.sh@850 -- # return 0 00:09:48.557 14:46:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:48.557 14:46:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:48.557 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.557 14:46:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.557 14:46:48 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:48.557 14:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.557 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.557 [2024-04-26 14:46:48.569179] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:48.557 [2024-04-26 14:46:48.594467] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f6472460940) succeed. 00:09:48.557 [2024-04-26 14:46:48.605382] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f647241c940) succeed. 
00:09:48.815 14:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:48.815 14:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.815 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.815 14:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.815 14:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.815 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.815 14:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.815 14:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.815 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.815 14:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:48.815 14:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.815 14:46:48 -- common/autotest_common.sh@10 -- # set +x 00:09:48.815 [2024-04-26 14:46:48.874644] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:48.815 14:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:48.815 14:46:48 -- target/connect_disconnect.sh@34 -- # set +x 00:09:56.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.029 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.335 14:47:28 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:29.335 14:47:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:29.335 14:47:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:29.335 14:47:28 -- nvmf/common.sh@117 -- # sync 00:10:29.335 14:47:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:29.335 14:47:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:29.335 14:47:28 -- nvmf/common.sh@120 -- # set +e 00:10:29.335 14:47:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.335 14:47:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:29.335 rmmod nvme_rdma 00:10:29.335 rmmod nvme_fabrics 00:10:29.335 14:47:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.335 14:47:28 -- nvmf/common.sh@124 -- # set -e 00:10:29.335 14:47:28 -- nvmf/common.sh@125 -- # return 0 00:10:29.335 14:47:28 -- nvmf/common.sh@478 -- # '[' -n 146840 ']' 00:10:29.335 14:47:28 -- nvmf/common.sh@479 -- # killprocess 146840 00:10:29.335 14:47:28 -- common/autotest_common.sh@936 -- # '[' -z 146840 ']' 00:10:29.335 14:47:28 -- common/autotest_common.sh@940 -- # kill -0 146840 00:10:29.335 14:47:28 -- common/autotest_common.sh@941 -- # uname 00:10:29.335 14:47:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:29.335 14:47:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146840 00:10:29.335 14:47:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:29.335 14:47:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:29.335 14:47:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146840' 00:10:29.335 killing process with pid 146840 00:10:29.335 
14:47:28 -- common/autotest_common.sh@955 -- # kill 146840 00:10:29.335 14:47:28 -- common/autotest_common.sh@960 -- # wait 146840 00:10:29.335 [2024-04-26 14:47:28.770177] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:30.269 14:47:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:30.269 14:47:30 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:30.269 00:10:30.269 real 0m44.822s 00:10:30.269 user 2m46.685s 00:10:30.269 sys 0m2.749s 00:10:30.269 14:47:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.269 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:10:30.269 ************************************ 00:10:30.269 END TEST nvmf_connect_disconnect 00:10:30.269 ************************************ 00:10:30.269 14:47:30 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:30.269 14:47:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:30.269 14:47:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.269 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:10:30.269 ************************************ 00:10:30.269 START TEST nvmf_multitarget 00:10:30.269 ************************************ 00:10:30.269 14:47:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:30.527 * Looking for test storage... 
00:10:30.527 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:30.527 14:47:30 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.527 14:47:30 -- nvmf/common.sh@7 -- # uname -s 00:10:30.527 14:47:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.527 14:47:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.527 14:47:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.527 14:47:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.527 14:47:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.527 14:47:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.527 14:47:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.527 14:47:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.527 14:47:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.527 14:47:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.527 14:47:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:30.527 14:47:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:30.527 14:47:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.527 14:47:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.527 14:47:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.527 14:47:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.527 14:47:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.527 14:47:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.527 14:47:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.527 14:47:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.527 14:47:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.527 14:47:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.527 14:47:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.527 14:47:30 -- paths/export.sh@5 -- # export PATH 00:10:30.527 14:47:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.527 14:47:30 -- nvmf/common.sh@47 -- # : 0 00:10:30.527 14:47:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.527 14:47:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.527 14:47:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.527 14:47:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.527 14:47:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.527 14:47:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.527 14:47:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.527 14:47:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.527 14:47:30 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:30.527 14:47:30 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:30.527 14:47:30 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:30.527 14:47:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.527 14:47:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:30.527 14:47:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:30.527 14:47:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:30.527 14:47:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.527 14:47:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.527 14:47:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.527 14:47:30 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:30.527 14:47:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:30.527 14:47:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.527 14:47:30 -- common/autotest_common.sh@10 -- # set +x 00:10:32.433 14:47:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:32.433 14:47:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.433 14:47:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.433 14:47:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.433 14:47:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.433 14:47:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.433 14:47:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.433 14:47:32 -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.433 14:47:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.433 14:47:32 -- nvmf/common.sh@296 -- # e810=() 00:10:32.433 14:47:32 -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.433 14:47:32 -- nvmf/common.sh@297 -- # x722=() 00:10:32.433 14:47:32 -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.433 14:47:32 -- nvmf/common.sh@298 -- # mlx=() 00:10:32.433 14:47:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.433 14:47:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.433 14:47:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.433 14:47:32 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.434 14:47:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.434 14:47:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.434 14:47:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:32.434 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:32.434 14:47:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.434 14:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:32.434 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:32.434 14:47:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.434 14:47:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.434 14:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.434 14:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:32.434 Found net devices under 0000:09:00.0: mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.434 14:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.434 14:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:32.434 Found net devices under 0000:09:00.1: mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.434 14:47:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:32.434 14:47:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:32.434 14:47:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:32.434 14:47:32 -- nvmf/common.sh@58 -- # uname 00:10:32.434 14:47:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:32.434 14:47:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:32.434 14:47:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:32.434 14:47:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:32.434 14:47:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:32.434 14:47:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:32.434 
14:47:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:32.434 14:47:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:32.434 14:47:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:32.434 14:47:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:32.434 14:47:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:32.434 14:47:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.434 14:47:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:32.434 14:47:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:32.434 14:47:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.434 14:47:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@105 -- # continue 2 00:10:32.434 14:47:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@105 -- # continue 2 00:10:32.434 14:47:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:32.434 14:47:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.434 14:47:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:32.434 14:47:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:32.434 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.434 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:10:32.434 altname enp9s0f0np0 00:10:32.434 inet 192.168.100.8/24 scope global mlx_0_0 00:10:32.434 valid_lft forever preferred_lft forever 00:10:32.434 14:47:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:32.434 14:47:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.434 14:47:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:32.434 14:47:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:32.434 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.434 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:10:32.434 altname enp9s0f1np1 00:10:32.434 inet 192.168.100.9/24 scope global mlx_0_1 00:10:32.434 valid_lft forever preferred_lft forever 00:10:32.434 14:47:32 -- nvmf/common.sh@411 -- # return 0 00:10:32.434 14:47:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:32.434 14:47:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:32.434 14:47:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:32.434 14:47:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:32.434 14:47:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.434 
14:47:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:32.434 14:47:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:32.434 14:47:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.434 14:47:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:32.434 14:47:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@105 -- # continue 2 00:10:32.434 14:47:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.434 14:47:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.434 14:47:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@105 -- # continue 2 00:10:32.434 14:47:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:32.434 14:47:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.434 14:47:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:32.434 14:47:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:10:32.434 14:47:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.434 14:47:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:32.434 192.168.100.9' 00:10:32.434 14:47:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:32.434 192.168.100.9' 00:10:32.434 14:47:32 -- nvmf/common.sh@446 -- # head -n 1 00:10:32.434 14:47:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:32.434 14:47:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:32.434 192.168.100.9' 00:10:32.434 14:47:32 -- nvmf/common.sh@447 -- # tail -n +2 00:10:32.434 14:47:32 -- nvmf/common.sh@447 -- # head -n 1 00:10:32.434 14:47:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:32.434 14:47:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:32.434 14:47:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:32.434 14:47:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:32.434 14:47:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:32.434 14:47:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:32.434 14:47:32 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:32.434 14:47:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:32.434 14:47:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:32.434 14:47:32 -- common/autotest_common.sh@10 -- # set +x 00:10:32.434 14:47:32 -- nvmf/common.sh@470 -- # nvmfpid=153615 00:10:32.434 14:47:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.434 14:47:32 -- nvmf/common.sh@471 -- # waitforlisten 153615 00:10:32.434 14:47:32 -- common/autotest_common.sh@817 -- # '[' -z 153615 ']' 00:10:32.434 14:47:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.434 14:47:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:32.434 14:47:32 -- common/autotest_common.sh@824 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.434 14:47:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:32.434 14:47:32 -- common/autotest_common.sh@10 -- # set +x 00:10:32.692 [2024-04-26 14:47:32.536345] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:32.692 [2024-04-26 14:47:32.536517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.692 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.692 [2024-04-26 14:47:32.665085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.950 [2024-04-26 14:47:32.917495] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.950 [2024-04-26 14:47:32.917569] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.950 [2024-04-26 14:47:32.917598] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.950 [2024-04-26 14:47:32.917621] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.950 [2024-04-26 14:47:32.917640] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.950 [2024-04-26 14:47:32.917771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.950 [2024-04-26 14:47:32.917828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.950 [2024-04-26 14:47:32.917876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.950 [2024-04-26 14:47:32.917883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.517 14:47:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:33.517 14:47:33 -- common/autotest_common.sh@850 -- # return 0 00:10:33.517 14:47:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:33.517 14:47:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:33.517 14:47:33 -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 14:47:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.517 14:47:33 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:33.517 14:47:33 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:33.517 14:47:33 -- target/multitarget.sh@21 -- # jq length 00:10:33.774 14:47:33 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:33.774 14:47:33 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:33.774 "nvmf_tgt_1" 00:10:33.774 14:47:33 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:33.774 "nvmf_tgt_2" 00:10:34.032 14:47:33 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:34.032 14:47:33 -- target/multitarget.sh@28 -- # jq length 00:10:34.032 14:47:33 -- 
target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:34.032 14:47:33 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:34.032 true 00:10:34.032 14:47:34 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:34.289 true 00:10:34.289 14:47:34 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:34.289 14:47:34 -- target/multitarget.sh@35 -- # jq length 00:10:34.289 14:47:34 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:34.289 14:47:34 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:34.289 14:47:34 -- target/multitarget.sh@41 -- # nvmftestfini 00:10:34.289 14:47:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:34.289 14:47:34 -- nvmf/common.sh@117 -- # sync 00:10:34.289 14:47:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:34.289 14:47:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:34.289 14:47:34 -- nvmf/common.sh@120 -- # set +e 00:10:34.289 14:47:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.289 14:47:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:34.289 rmmod nvme_rdma 00:10:34.289 rmmod nvme_fabrics 00:10:34.289 14:47:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.289 14:47:34 -- nvmf/common.sh@124 -- # set -e 00:10:34.289 14:47:34 -- nvmf/common.sh@125 -- # return 0 00:10:34.289 14:47:34 -- nvmf/common.sh@478 -- # '[' -n 153615 ']' 00:10:34.289 14:47:34 -- nvmf/common.sh@479 -- # killprocess 153615 00:10:34.289 14:47:34 -- common/autotest_common.sh@936 -- # '[' -z 153615 ']' 00:10:34.289 14:47:34 -- common/autotest_common.sh@940 -- # kill -0 153615 00:10:34.289 14:47:34 -- common/autotest_common.sh@941 -- # uname 00:10:34.289 14:47:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:10:34.289 14:47:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 153615 00:10:34.289 14:47:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:34.289 14:47:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:34.289 14:47:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 153615' 00:10:34.289 killing process with pid 153615 00:10:34.289 14:47:34 -- common/autotest_common.sh@955 -- # kill 153615 00:10:34.289 14:47:34 -- common/autotest_common.sh@960 -- # wait 153615 00:10:35.661 14:47:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:35.661 14:47:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:35.661 00:10:35.661 real 0m5.267s 00:10:35.661 user 0m11.367s 00:10:35.661 sys 0m2.000s 00:10:35.661 14:47:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.661 14:47:35 -- common/autotest_common.sh@10 -- # set +x 00:10:35.661 ************************************ 00:10:35.661 END TEST nvmf_multitarget 00:10:35.661 ************************************ 00:10:35.661 14:47:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:35.661 14:47:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:35.661 14:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.661 14:47:35 -- common/autotest_common.sh@10 -- # set +x 00:10:35.661 ************************************ 00:10:35.661 START TEST nvmf_rpc 00:10:35.661 ************************************ 00:10:35.661 14:47:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:35.919 * Looking for test storage... 
00:10:35.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:35.919 14:47:35 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.919 14:47:35 -- nvmf/common.sh@7 -- # uname -s 00:10:35.919 14:47:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.919 14:47:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.919 14:47:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.919 14:47:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.919 14:47:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.919 14:47:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.919 14:47:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.919 14:47:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.919 14:47:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.919 14:47:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.919 14:47:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:35.919 14:47:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:10:35.919 14:47:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.920 14:47:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.920 14:47:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.920 14:47:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.920 14:47:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:35.920 14:47:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.920 14:47:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.920 14:47:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.920 14:47:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.920 14:47:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.920 14:47:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.920 14:47:35 -- paths/export.sh@5 -- # export PATH 00:10:35.920 14:47:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.920 14:47:35 -- nvmf/common.sh@47 -- # : 0 00:10:35.920 14:47:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.920 14:47:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.920 14:47:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.920 14:47:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.920 14:47:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.920 14:47:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.920 14:47:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.920 14:47:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.920 14:47:35 -- target/rpc.sh@11 -- # loops=5 00:10:35.920 14:47:35 -- target/rpc.sh@23 -- # nvmftestinit 00:10:35.920 14:47:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:35.920 14:47:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.920 14:47:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:35.920 14:47:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:35.920 14:47:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:35.920 14:47:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.920 14:47:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.920 14:47:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.920 14:47:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:35.920 14:47:35 -- nvmf/common.sh@403 -- # 
gather_supported_nvmf_pci_devs 00:10:35.920 14:47:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.920 14:47:35 -- common/autotest_common.sh@10 -- # set +x 00:10:37.819 14:47:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:37.819 14:47:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.819 14:47:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.819 14:47:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.819 14:47:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.819 14:47:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.819 14:47:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.819 14:47:37 -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.819 14:47:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.819 14:47:37 -- nvmf/common.sh@296 -- # e810=() 00:10:37.819 14:47:37 -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.819 14:47:37 -- nvmf/common.sh@297 -- # x722=() 00:10:37.819 14:47:37 -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.819 14:47:37 -- nvmf/common.sh@298 -- # mlx=() 00:10:37.819 14:47:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.819 14:47:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.819 14:47:37 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.819 14:47:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.819 14:47:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.819 14:47:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:37.819 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:37.819 14:47:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.819 14:47:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.819 14:47:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:37.819 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:37.819 14:47:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:37.819 14:47:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.819 14:47:37 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.819 14:47:37 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.819 14:47:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:37.819 14:47:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.819 14:47:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:37.819 Found net devices under 0000:09:00.0: mlx_0_0 00:10:37.819 14:47:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.819 14:47:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.819 14:47:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:37.819 14:47:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.819 14:47:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:37.819 Found net devices under 0000:09:00.1: mlx_0_1 00:10:37.819 14:47:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.819 14:47:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:37.819 14:47:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:37.819 14:47:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:37.819 14:47:37 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:37.819 14:47:37 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:37.819 14:47:37 -- nvmf/common.sh@58 -- # uname 00:10:37.819 14:47:37 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:37.819 14:47:37 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:37.819 14:47:37 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:37.819 14:47:37 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:37.819 14:47:37 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:37.819 14:47:37 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:37.819 14:47:37 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:37.819 
14:47:37 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:37.819 14:47:37 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:37.819 14:47:37 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:37.819 14:47:37 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:37.819 14:47:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.819 14:47:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:37.819 14:47:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:37.820 14:47:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.820 14:47:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:37.820 14:47:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@105 -- # continue 2 00:10:37.820 14:47:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@105 -- # continue 2 00:10:37.820 14:47:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:37.820 14:47:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.820 14:47:37 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:10:37.820 14:47:37 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:37.820 14:47:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:37.820 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.820 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:10:37.820 altname enp9s0f0np0 00:10:37.820 inet 192.168.100.8/24 scope global mlx_0_0 00:10:37.820 valid_lft forever preferred_lft forever 00:10:37.820 14:47:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:37.820 14:47:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.820 14:47:37 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:37.820 14:47:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:37.820 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:37.820 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:10:37.820 altname enp9s0f1np1 00:10:37.820 inet 192.168.100.9/24 scope global mlx_0_1 00:10:37.820 valid_lft forever preferred_lft forever 00:10:37.820 14:47:37 -- nvmf/common.sh@411 -- # return 0 00:10:37.820 14:47:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:37.820 14:47:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:37.820 14:47:37 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:37.820 14:47:37 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:37.820 14:47:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:37.820 14:47:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
00:10:37.820 14:47:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:37.820 14:47:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:37.820 14:47:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:37.820 14:47:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@105 -- # continue 2 00:10:37.820 14:47:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:37.820 14:47:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:37.820 14:47:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@105 -- # continue 2 00:10:37.820 14:47:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:37.820 14:47:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.820 14:47:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:37.820 14:47:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:37.820 14:47:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:37.820 14:47:37 -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:10:37.820 14:47:37 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:37.820 192.168.100.9' 00:10:37.820 14:47:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:37.820 192.168.100.9' 00:10:37.820 14:47:37 -- nvmf/common.sh@446 -- # head -n 1 00:10:37.820 14:47:37 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:37.820 14:47:37 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:37.820 192.168.100.9' 00:10:37.820 14:47:37 -- nvmf/common.sh@447 -- # tail -n +2 00:10:37.820 14:47:37 -- nvmf/common.sh@447 -- # head -n 1 00:10:37.820 14:47:37 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:37.820 14:47:37 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:37.820 14:47:37 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:37.820 14:47:37 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:37.820 14:47:37 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:37.820 14:47:37 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:37.820 14:47:37 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:37.820 14:47:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:37.820 14:47:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:37.820 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 14:47:37 -- nvmf/common.sh@470 -- # nvmfpid=155711 00:10:37.820 14:47:37 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.820 14:47:37 -- nvmf/common.sh@471 -- # waitforlisten 155711 00:10:37.820 14:47:37 -- common/autotest_common.sh@817 -- # '[' -z 155711 ']' 00:10:37.820 14:47:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.820 14:47:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.820 14:47:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:37.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.820 14:47:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.820 14:47:37 -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 [2024-04-26 14:47:37.799840] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:37.820 [2024-04-26 14:47:37.799964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.820 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.077 [2024-04-26 14:47:37.928177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.333 [2024-04-26 14:47:38.185791] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.333 [2024-04-26 14:47:38.185857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.333 [2024-04-26 14:47:38.185886] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.333 [2024-04-26 14:47:38.185910] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.333 [2024-04-26 14:47:38.185929] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:38.333 [2024-04-26 14:47:38.186055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.333 [2024-04-26 14:47:38.189161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.333 [2024-04-26 14:47:38.189209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.333 [2024-04-26 14:47:38.189214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.898 14:47:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:38.898 14:47:38 -- common/autotest_common.sh@850 -- # return 0 00:10:38.898 14:47:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:38.898 14:47:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:38.898 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:10:38.898 14:47:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.898 14:47:38 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:38.898 14:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.898 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:10:38.898 14:47:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.898 14:47:38 -- target/rpc.sh@26 -- # stats='{ 00:10:38.898 "tick_rate": 2700000000, 00:10:38.898 "poll_groups": [ 00:10:38.898 { 00:10:38.898 "name": "nvmf_tgt_poll_group_0", 00:10:38.898 "admin_qpairs": 0, 00:10:38.898 "io_qpairs": 0, 00:10:38.898 "current_admin_qpairs": 0, 00:10:38.898 "current_io_qpairs": 0, 00:10:38.898 "pending_bdev_io": 0, 00:10:38.898 "completed_nvme_io": 0, 00:10:38.898 "transports": [] 00:10:38.898 }, 00:10:38.898 { 00:10:38.898 "name": "nvmf_tgt_poll_group_1", 00:10:38.898 "admin_qpairs": 0, 00:10:38.898 "io_qpairs": 0, 00:10:38.898 "current_admin_qpairs": 0, 00:10:38.898 "current_io_qpairs": 0, 00:10:38.898 "pending_bdev_io": 0, 00:10:38.898 "completed_nvme_io": 0, 00:10:38.898 "transports": [] 00:10:38.898 }, 00:10:38.898 { 00:10:38.898 "name": 
"nvmf_tgt_poll_group_2", 00:10:38.898 "admin_qpairs": 0, 00:10:38.898 "io_qpairs": 0, 00:10:38.898 "current_admin_qpairs": 0, 00:10:38.898 "current_io_qpairs": 0, 00:10:38.898 "pending_bdev_io": 0, 00:10:38.898 "completed_nvme_io": 0, 00:10:38.898 "transports": [] 00:10:38.898 }, 00:10:38.898 { 00:10:38.898 "name": "nvmf_tgt_poll_group_3", 00:10:38.898 "admin_qpairs": 0, 00:10:38.898 "io_qpairs": 0, 00:10:38.898 "current_admin_qpairs": 0, 00:10:38.898 "current_io_qpairs": 0, 00:10:38.898 "pending_bdev_io": 0, 00:10:38.898 "completed_nvme_io": 0, 00:10:38.898 "transports": [] 00:10:38.898 } 00:10:38.898 ] 00:10:38.898 }' 00:10:38.898 14:47:38 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:38.898 14:47:38 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:38.898 14:47:38 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:38.898 14:47:38 -- target/rpc.sh@15 -- # wc -l 00:10:38.898 14:47:38 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:38.899 14:47:38 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:38.899 14:47:38 -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:38.899 14:47:38 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:38.899 14:47:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.899 14:47:38 -- common/autotest_common.sh@10 -- # set +x 00:10:38.899 [2024-04-26 14:47:38.888069] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7faf4a82f940) succeed. 00:10:38.899 [2024-04-26 14:47:38.898962] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7faf4a7eb940) succeed. 
00:10:39.176 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.176 14:47:39 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:39.176 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.176 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.176 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.176 14:47:39 -- target/rpc.sh@33 -- # stats='{ 00:10:39.176 "tick_rate": 2700000000, 00:10:39.176 "poll_groups": [ 00:10:39.176 { 00:10:39.176 "name": "nvmf_tgt_poll_group_0", 00:10:39.176 "admin_qpairs": 0, 00:10:39.176 "io_qpairs": 0, 00:10:39.176 "current_admin_qpairs": 0, 00:10:39.176 "current_io_qpairs": 0, 00:10:39.176 "pending_bdev_io": 0, 00:10:39.176 "completed_nvme_io": 0, 00:10:39.176 "transports": [ 00:10:39.176 { 00:10:39.176 "trtype": "RDMA", 00:10:39.176 "pending_data_buffer": 0, 00:10:39.176 "devices": [ 00:10:39.176 { 00:10:39.176 "name": "mlx5_0", 00:10:39.176 "polls": 37651, 00:10:39.176 "idle_polls": 37651, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 "name": "mlx5_1", 00:10:39.176 "polls": 37651, 00:10:39.176 "idle_polls": 37651, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 }, 
00:10:39.176 { 00:10:39.176 "name": "nvmf_tgt_poll_group_1", 00:10:39.176 "admin_qpairs": 0, 00:10:39.176 "io_qpairs": 0, 00:10:39.176 "current_admin_qpairs": 0, 00:10:39.176 "current_io_qpairs": 0, 00:10:39.176 "pending_bdev_io": 0, 00:10:39.176 "completed_nvme_io": 0, 00:10:39.176 "transports": [ 00:10:39.176 { 00:10:39.176 "trtype": "RDMA", 00:10:39.176 "pending_data_buffer": 0, 00:10:39.176 "devices": [ 00:10:39.176 { 00:10:39.176 "name": "mlx5_0", 00:10:39.176 "polls": 25317, 00:10:39.176 "idle_polls": 25317, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 "name": "mlx5_1", 00:10:39.176 "polls": 25317, 00:10:39.176 "idle_polls": 25317, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 "name": "nvmf_tgt_poll_group_2", 00:10:39.176 "admin_qpairs": 0, 00:10:39.176 "io_qpairs": 0, 00:10:39.176 "current_admin_qpairs": 0, 00:10:39.176 "current_io_qpairs": 0, 00:10:39.176 "pending_bdev_io": 0, 00:10:39.176 "completed_nvme_io": 0, 00:10:39.176 "transports": [ 00:10:39.176 { 00:10:39.176 "trtype": "RDMA", 00:10:39.176 "pending_data_buffer": 0, 00:10:39.176 "devices": [ 00:10:39.176 { 00:10:39.176 "name": "mlx5_0", 00:10:39.176 "polls": 
13186, 00:10:39.176 "idle_polls": 13186, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 "name": "mlx5_1", 00:10:39.176 "polls": 13186, 00:10:39.176 "idle_polls": 13186, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 "name": "nvmf_tgt_poll_group_3", 00:10:39.176 "admin_qpairs": 0, 00:10:39.176 "io_qpairs": 0, 00:10:39.176 "current_admin_qpairs": 0, 00:10:39.176 "current_io_qpairs": 0, 00:10:39.176 "pending_bdev_io": 0, 00:10:39.176 "completed_nvme_io": 0, 00:10:39.176 "transports": [ 00:10:39.176 { 00:10:39.176 "trtype": "RDMA", 00:10:39.176 "pending_data_buffer": 0, 00:10:39.176 "devices": [ 00:10:39.176 { 00:10:39.176 "name": "mlx5_0", 00:10:39.176 "polls": 828, 00:10:39.176 "idle_polls": 828, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 }, 00:10:39.176 { 00:10:39.176 
"name": "mlx5_1", 00:10:39.176 "polls": 828, 00:10:39.176 "idle_polls": 828, 00:10:39.176 "completions": 0, 00:10:39.176 "requests": 0, 00:10:39.176 "request_latency": 0, 00:10:39.176 "pending_free_request": 0, 00:10:39.176 "pending_rdma_read": 0, 00:10:39.176 "pending_rdma_write": 0, 00:10:39.176 "pending_rdma_send": 0, 00:10:39.176 "total_send_wrs": 0, 00:10:39.176 "send_doorbell_updates": 0, 00:10:39.176 "total_recv_wrs": 4096, 00:10:39.176 "recv_doorbell_updates": 1 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 } 00:10:39.176 ] 00:10:39.176 }' 00:10:39.177 14:47:39 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:39.177 14:47:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:39.177 14:47:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:39.177 14:47:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.434 14:47:39 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:39.434 14:47:39 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:39.435 14:47:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:39.435 14:47:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:39.435 14:47:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.435 14:47:39 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:39.435 14:47:39 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:39.435 14:47:39 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:39.435 14:47:39 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:39.435 14:47:39 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:39.435 14:47:39 -- target/rpc.sh@15 -- # wc -l 00:10:39.435 14:47:39 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:39.435 14:47:39 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:39.435 14:47:39 -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:39.435 14:47:39 -- 
target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:39.435 14:47:39 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:39.435 14:47:39 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:39.435 14:47:39 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:39.435 14:47:39 -- target/rpc.sh@15 -- # wc -l 00:10:39.435 14:47:39 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:39.435 14:47:39 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:39.435 14:47:39 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:39.435 14:47:39 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:39.435 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.435 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.435 Malloc1 00:10:39.435 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.435 14:47:39 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.435 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.435 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.435 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.435 14:47:39 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.435 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.435 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.693 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.693 14:47:39 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:39.693 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.693 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.693 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.693 14:47:39 -- target/rpc.sh@55 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:39.693 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.693 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.693 [2024-04-26 14:47:39.531714] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:39.693 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.693 14:47:39 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 192.168.100.8 -s 4420 00:10:39.693 14:47:39 -- common/autotest_common.sh@638 -- # local es=0 00:10:39.693 14:47:39 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 192.168.100.8 -s 4420 00:10:39.693 14:47:39 -- common/autotest_common.sh@626 -- # local arg=nvme 00:10:39.693 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:39.693 14:47:39 -- common/autotest_common.sh@630 -- # type -t nvme 00:10:39.693 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:39.693 14:47:39 -- common/autotest_common.sh@632 -- # type -P nvme 00:10:39.693 14:47:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:39.693 14:47:39 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:10:39.693 14:47:39 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:10:39.693 14:47:39 -- common/autotest_common.sh@641 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd -a 192.168.100.8 -s 4420 00:10:39.693 [2024-04-26 14:47:39.571855] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:10:39.693 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:39.693 could not add new controller: failed to write to nvme-fabrics device 00:10:39.693 14:47:39 -- common/autotest_common.sh@641 -- # es=1 00:10:39.693 14:47:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:39.693 14:47:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:39.693 14:47:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:39.693 14:47:39 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:39.693 14:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.693 14:47:39 -- common/autotest_common.sh@10 -- # set +x 00:10:39.693 14:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.693 14:47:39 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:43.869 14:47:43 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.869 14:47:43 -- common/autotest_common.sh@1184 -- # local i=0 00:10:43.869 14:47:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.869 14:47:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:43.869 14:47:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:45.243 14:47:45 -- 
common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:45.243 14:47:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:45.243 14:47:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.243 14:47:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:45.243 14:47:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.243 14:47:45 -- common/autotest_common.sh@1194 -- # return 0 00:10:45.243 14:47:45 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.764 14:47:47 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.764 14:47:47 -- common/autotest_common.sh@1205 -- # local i=0 00:10:47.764 14:47:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:47.764 14:47:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.764 14:47:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:47.764 14:47:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.764 14:47:47 -- common/autotest_common.sh@1217 -- # return 0 00:10:47.764 14:47:47 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:10:47.764 14:47:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.764 14:47:47 -- common/autotest_common.sh@10 -- # set +x 00:10:47.764 14:47:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.764 14:47:47 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:47.764 14:47:47 -- common/autotest_common.sh@638 -- # local es=0 00:10:47.764 14:47:47 -- 
common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:47.764 14:47:47 -- common/autotest_common.sh@626 -- # local arg=nvme 00:10:47.764 14:47:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:47.764 14:47:47 -- common/autotest_common.sh@630 -- # type -t nvme 00:10:47.764 14:47:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:47.764 14:47:47 -- common/autotest_common.sh@632 -- # type -P nvme 00:10:47.764 14:47:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:47.764 14:47:47 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:10:47.764 14:47:47 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:10:47.764 14:47:47 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:47.764 [2024-04-26 14:47:47.615917] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd' 00:10:47.764 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:47.764 could not add new controller: failed to write to nvme-fabrics device 00:10:47.764 14:47:47 -- common/autotest_common.sh@641 -- # es=1 00:10:47.764 14:47:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:47.764 14:47:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:47.764 14:47:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:47.764 14:47:47 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:47.764 14:47:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.764 14:47:47 -- common/autotest_common.sh@10 -- # set +x 00:10:47.764 14:47:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.764 14:47:47 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:51.045 14:47:50 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.045 14:47:50 -- common/autotest_common.sh@1184 -- # local i=0 00:10:51.045 14:47:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.045 14:47:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:51.045 14:47:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:52.944 14:47:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:52.944 14:47:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:52.944 14:47:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.944 14:47:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:52.944 14:47:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.944 14:47:53 -- common/autotest_common.sh@1194 -- # return 0 00:10:52.944 14:47:53 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.468 14:47:55 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.468 14:47:55 -- common/autotest_common.sh@1205 -- # local i=0 00:10:55.468 14:47:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:55.468 14:47:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.468 14:47:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:55.468 14:47:55 -- common/autotest_common.sh@1213 -- # grep 
-q -w SPDKISFASTANDAWESOME 00:10:55.468 14:47:55 -- common/autotest_common.sh@1217 -- # return 0 00:10:55.468 14:47:55 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.468 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.468 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.468 14:47:55 -- target/rpc.sh@81 -- # seq 1 5 00:10:55.468 14:47:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:55.468 14:47:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:55.468 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.468 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.468 14:47:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:55.468 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.468 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 [2024-04-26 14:47:55.458965] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:55.468 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.468 14:47:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:55.468 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.468 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 14:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.468 14:47:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:55.468 14:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.468 14:47:55 -- common/autotest_common.sh@10 -- # set +x 00:10:55.468 14:47:55 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.468 14:47:55 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:59.704 14:47:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.704 14:47:59 -- common/autotest_common.sh@1184 -- # local i=0 00:10:59.704 14:47:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.704 14:47:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:59.704 14:47:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:01.077 14:48:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:01.077 14:48:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:01.077 14:48:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.077 14:48:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:01.077 14:48:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.077 14:48:01 -- common/autotest_common.sh@1194 -- # return 0 00:11:01.077 14:48:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.608 14:48:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.608 14:48:03 -- common/autotest_common.sh@1205 -- # local i=0 00:11:03.608 14:48:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:03.608 14:48:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.608 14:48:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:03.608 14:48:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.608 14:48:03 -- common/autotest_common.sh@1217 -- # return 0 00:11:03.608 14:48:03 -- target/rpc.sh@93 -- 
# rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.608 14:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.608 14:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:03.608 14:48:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:03.608 14:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:03.608 14:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 [2024-04-26 14:48:03.327341] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:03.608 14:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:03.608 14:48:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.608 14:48:03 -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 14:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.608 14:48:03 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:06.889 14:48:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.889 14:48:06 -- common/autotest_common.sh@1184 -- # local i=0 00:11:06.889 14:48:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.889 14:48:06 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:06.889 14:48:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:08.787 14:48:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:08.787 14:48:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:08.787 14:48:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.787 14:48:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:08.787 14:48:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.787 14:48:08 -- common/autotest_common.sh@1194 -- # return 0 00:11:08.787 14:48:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.317 14:48:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.317 14:48:11 -- common/autotest_common.sh@1205 -- # local i=0 00:11:11.317 14:48:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:11.317 14:48:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.317 14:48:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:11.317 14:48:11 -- common/autotest_common.sh@1213 -- # grep 
-q -w SPDKISFASTANDAWESOME 00:11:11.317 14:48:11 -- common/autotest_common.sh@1217 -- # return 0 00:11:11.317 14:48:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 14:48:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 14:48:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.317 14:48:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 14:48:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 [2024-04-26 14:48:11.170023] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 14:48:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 
14:48:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.317 14:48:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.317 14:48:11 -- common/autotest_common.sh@10 -- # set +x 00:11:11.317 14:48:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.317 14:48:11 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:14.596 14:48:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.596 14:48:14 -- common/autotest_common.sh@1184 -- # local i=0 00:11:14.596 14:48:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.596 14:48:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:14.596 14:48:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:16.496 14:48:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:16.496 14:48:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:16.496 14:48:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.496 14:48:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:16.496 14:48:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.496 14:48:16 -- common/autotest_common.sh@1194 -- # return 0 00:11:16.496 14:48:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.024 14:48:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.024 14:48:18 -- common/autotest_common.sh@1205 -- # local i=0 00:11:19.024 14:48:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:19.024 14:48:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.024 14:48:18 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:19.024 14:48:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.024 14:48:18 -- common/autotest_common.sh@1217 -- # return 0 00:11:19.024 14:48:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:19.024 14:48:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.024 [2024-04-26 14:48:18.908413] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.024 14:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.024 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:11:19.024 14:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.024 14:48:18 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:22.302 14:48:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.302 14:48:22 -- common/autotest_common.sh@1184 -- # local i=0 00:11:22.302 14:48:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.302 14:48:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:22.302 14:48:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:24.828 14:48:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:24.828 14:48:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:24.828 14:48:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.828 14:48:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:24.828 14:48:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.828 14:48:24 -- common/autotest_common.sh@1194 -- # return 0 00:11:24.828 14:48:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.726 14:48:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.726 14:48:26 -- common/autotest_common.sh@1205 -- # local i=0 00:11:26.726 14:48:26 -- common/autotest_common.sh@1206 -- # lsblk -o 
NAME,SERIAL 00:11:26.726 14:48:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.726 14:48:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:26.726 14:48:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.726 14:48:26 -- common/autotest_common.sh@1217 -- # return 0 00:11:26.726 14:48:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:26.726 14:48:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 [2024-04-26 14:48:26.664012] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.726 14:48:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.726 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:11:26.726 14:48:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.726 14:48:26 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:30.907 14:48:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.907 14:48:30 -- common/autotest_common.sh@1184 -- # local i=0 00:11:30.907 14:48:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.907 14:48:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:30.907 14:48:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:32.280 14:48:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:32.280 14:48:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:32.280 14:48:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.280 14:48:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:32.280 14:48:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.280 14:48:32 -- common/autotest_common.sh@1194 -- # return 0 00:11:32.280 14:48:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.808 14:48:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:11:34.808 14:48:34 -- common/autotest_common.sh@1205 -- # local i=0 00:11:34.808 14:48:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:34.808 14:48:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.808 14:48:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:34.808 14:48:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.808 14:48:34 -- common/autotest_common.sh@1217 -- # return 0 00:11:34.808 14:48:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@99 -- # seq 1 5 00:11:34.808 14:48:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.808 14:48:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 [2024-04-26 14:48:34.521846] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target 
Listening on 192.168.100.8 port 4420 *** 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.808 14:48:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.808 14:48:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 [2024-04-26 14:48:34.574860] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.808 14:48:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 [2024-04-26 14:48:34.627736] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.808 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.808 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.808 14:48:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.808 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.809 14:48:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 [2024-04-26 14:48:34.680502] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 
00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.809 14:48:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 [2024-04-26 14:48:34.733359] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:34.809 14:48:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.809 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:11:34.809 14:48:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.809 14:48:34 -- target/rpc.sh@110 -- # stats='{ 00:11:34.809 "tick_rate": 2700000000, 00:11:34.809 "poll_groups": [ 00:11:34.809 { 00:11:34.809 "name": "nvmf_tgt_poll_group_0", 00:11:34.809 "admin_qpairs": 2, 00:11:34.809 "io_qpairs": 27, 00:11:34.809 "current_admin_qpairs": 0, 00:11:34.809 "current_io_qpairs": 0, 00:11:34.809 "pending_bdev_io": 0, 00:11:34.809 "completed_nvme_io": 29, 00:11:34.809 "transports": [ 00:11:34.809 { 00:11:34.809 "trtype": "RDMA", 00:11:34.809 "pending_data_buffer": 0, 00:11:34.809 "devices": [ 00:11:34.809 { 00:11:34.809 "name": "mlx5_0", 00:11:34.809 "polls": 6692463, 00:11:34.809 "idle_polls": 6692272, 00:11:34.809 "completions": 193, 00:11:34.809 "requests": 96, 00:11:34.809 "request_latency": 23958777, 00:11:34.809 "pending_free_request": 0, 00:11:34.809 "pending_rdma_read": 0, 00:11:34.809 "pending_rdma_write": 0, 00:11:34.809 "pending_rdma_send": 0, 00:11:34.809 "total_send_wrs": 133, 00:11:34.809 "send_doorbell_updates": 97, 00:11:34.809 "total_recv_wrs": 4192, 00:11:34.809 "recv_doorbell_updates": 97 00:11:34.809 }, 00:11:34.809 { 00:11:34.809 "name": "mlx5_1", 00:11:34.809 "polls": 6692463, 00:11:34.809 "idle_polls": 6692463, 00:11:34.809 "completions": 0, 00:11:34.809 "requests": 0, 00:11:34.809 "request_latency": 0, 00:11:34.809 "pending_free_request": 0, 00:11:34.809 "pending_rdma_read": 0, 00:11:34.809 "pending_rdma_write": 0, 00:11:34.809 
"pending_rdma_send": 0, 00:11:34.809 "total_send_wrs": 0, 00:11:34.809 "send_doorbell_updates": 0, 00:11:34.809 "total_recv_wrs": 4096, 00:11:34.809 "recv_doorbell_updates": 1 00:11:34.809 } 00:11:34.809 ] 00:11:34.809 } 00:11:34.809 ] 00:11:34.809 }, 00:11:34.809 { 00:11:34.809 "name": "nvmf_tgt_poll_group_1", 00:11:34.809 "admin_qpairs": 2, 00:11:34.809 "io_qpairs": 26, 00:11:34.809 "current_admin_qpairs": 0, 00:11:34.809 "current_io_qpairs": 0, 00:11:34.809 "pending_bdev_io": 0, 00:11:34.809 "completed_nvme_io": 173, 00:11:34.809 "transports": [ 00:11:34.809 { 00:11:34.809 "trtype": "RDMA", 00:11:34.809 "pending_data_buffer": 0, 00:11:34.809 "devices": [ 00:11:34.809 { 00:11:34.809 "name": "mlx5_0", 00:11:34.809 "polls": 6714886, 00:11:34.809 "idle_polls": 6714474, 00:11:34.809 "completions": 476, 00:11:34.809 "requests": 238, 00:11:34.809 "request_latency": 113953740, 00:11:34.809 "pending_free_request": 0, 00:11:34.809 "pending_rdma_read": 0, 00:11:34.809 "pending_rdma_write": 0, 00:11:34.809 "pending_rdma_send": 0, 00:11:34.809 "total_send_wrs": 418, 00:11:34.809 "send_doorbell_updates": 199, 00:11:34.809 "total_recv_wrs": 4334, 00:11:34.809 "recv_doorbell_updates": 200 00:11:34.809 }, 00:11:34.809 { 00:11:34.809 "name": "mlx5_1", 00:11:34.809 "polls": 6714886, 00:11:34.809 "idle_polls": 6714886, 00:11:34.809 "completions": 0, 00:11:34.809 "requests": 0, 00:11:34.809 "request_latency": 0, 00:11:34.809 "pending_free_request": 0, 00:11:34.809 "pending_rdma_read": 0, 00:11:34.809 "pending_rdma_write": 0, 00:11:34.809 "pending_rdma_send": 0, 00:11:34.809 "total_send_wrs": 0, 00:11:34.809 "send_doorbell_updates": 0, 00:11:34.809 "total_recv_wrs": 4096, 00:11:34.809 "recv_doorbell_updates": 1 00:11:34.809 } 00:11:34.809 ] 00:11:34.809 } 00:11:34.809 ] 00:11:34.809 }, 00:11:34.809 { 00:11:34.809 "name": "nvmf_tgt_poll_group_2", 00:11:34.809 "admin_qpairs": 1, 00:11:34.809 "io_qpairs": 26, 00:11:34.809 "current_admin_qpairs": 0, 00:11:34.809 "current_io_qpairs": 0, 
00:11:34.809 "pending_bdev_io": 0, 00:11:34.809 "completed_nvme_io": 102, 00:11:34.809 "transports": [ 00:11:34.809 { 00:11:34.809 "trtype": "RDMA", 00:11:34.810 "pending_data_buffer": 0, 00:11:34.810 "devices": [ 00:11:34.810 { 00:11:34.810 "name": "mlx5_0", 00:11:34.810 "polls": 6799701, 00:11:34.810 "idle_polls": 6799452, 00:11:34.810 "completions": 269, 00:11:34.810 "requests": 134, 00:11:34.810 "request_latency": 52126032, 00:11:34.810 "pending_free_request": 0, 00:11:34.810 "pending_rdma_read": 0, 00:11:34.810 "pending_rdma_write": 0, 00:11:34.810 "pending_rdma_send": 0, 00:11:34.810 "total_send_wrs": 227, 00:11:34.810 "send_doorbell_updates": 122, 00:11:34.810 "total_recv_wrs": 4230, 00:11:34.810 "recv_doorbell_updates": 122 00:11:34.810 }, 00:11:34.810 { 00:11:34.810 "name": "mlx5_1", 00:11:34.810 "polls": 6799701, 00:11:34.810 "idle_polls": 6799701, 00:11:34.810 "completions": 0, 00:11:34.810 "requests": 0, 00:11:34.810 "request_latency": 0, 00:11:34.810 "pending_free_request": 0, 00:11:34.810 "pending_rdma_read": 0, 00:11:34.810 "pending_rdma_write": 0, 00:11:34.810 "pending_rdma_send": 0, 00:11:34.810 "total_send_wrs": 0, 00:11:34.810 "send_doorbell_updates": 0, 00:11:34.810 "total_recv_wrs": 4096, 00:11:34.810 "recv_doorbell_updates": 1 00:11:34.810 } 00:11:34.810 ] 00:11:34.810 } 00:11:34.810 ] 00:11:34.810 }, 00:11:34.810 { 00:11:34.810 "name": "nvmf_tgt_poll_group_3", 00:11:34.810 "admin_qpairs": 2, 00:11:34.810 "io_qpairs": 26, 00:11:34.810 "current_admin_qpairs": 0, 00:11:34.810 "current_io_qpairs": 0, 00:11:34.810 "pending_bdev_io": 0, 00:11:34.810 "completed_nvme_io": 151, 00:11:34.810 "transports": [ 00:11:34.810 { 00:11:34.810 "trtype": "RDMA", 00:11:34.810 "pending_data_buffer": 0, 00:11:34.810 "devices": [ 00:11:34.810 { 00:11:34.810 "name": "mlx5_0", 00:11:34.810 "polls": 5160795, 00:11:34.810 "idle_polls": 5160438, 00:11:34.810 "completions": 434, 00:11:34.810 "requests": 217, 00:11:34.810 "request_latency": 111867498, 00:11:34.810 
"pending_free_request": 0, 00:11:34.810 "pending_rdma_read": 0, 00:11:34.810 "pending_rdma_write": 0, 00:11:34.810 "pending_rdma_send": 0, 00:11:34.810 "total_send_wrs": 376, 00:11:34.810 "send_doorbell_updates": 180, 00:11:34.810 "total_recv_wrs": 4313, 00:11:34.810 "recv_doorbell_updates": 181 00:11:34.810 }, 00:11:34.810 { 00:11:34.810 "name": "mlx5_1", 00:11:34.810 "polls": 5160795, 00:11:34.810 "idle_polls": 5160795, 00:11:34.810 "completions": 0, 00:11:34.810 "requests": 0, 00:11:34.810 "request_latency": 0, 00:11:34.810 "pending_free_request": 0, 00:11:34.810 "pending_rdma_read": 0, 00:11:34.810 "pending_rdma_write": 0, 00:11:34.810 "pending_rdma_send": 0, 00:11:34.810 "total_send_wrs": 0, 00:11:34.810 "send_doorbell_updates": 0, 00:11:34.810 "total_recv_wrs": 4096, 00:11:34.810 "recv_doorbell_updates": 1 00:11:34.810 } 00:11:34.810 ] 00:11:34.810 } 00:11:34.810 ] 00:11:34.810 } 00:11:34.810 ] 00:11:34.810 }' 00:11:34.810 14:48:34 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.810 14:48:34 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:34.810 14:48:34 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:34.810 14:48:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.068 14:48:34 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:35.068 14:48:34 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:35.068 14:48:34 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:35.068 14:48:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:35.068 
14:48:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:35.068 14:48:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.068 14:48:34 -- target/rpc.sh@117 -- # (( 1372 > 0 )) 00:11:35.068 14:48:34 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:35.068 14:48:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:35.068 14:48:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:35.068 14:48:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.068 14:48:34 -- target/rpc.sh@118 -- # (( 301906047 > 0 )) 00:11:35.068 14:48:34 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:35.068 14:48:34 -- target/rpc.sh@123 -- # nvmftestfini 00:11:35.068 14:48:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:35.068 14:48:34 -- nvmf/common.sh@117 -- # sync 00:11:35.068 14:48:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:35.068 14:48:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:35.068 14:48:34 -- nvmf/common.sh@120 -- # set +e 00:11:35.068 14:48:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.068 14:48:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:35.068 rmmod nvme_rdma 00:11:35.068 rmmod nvme_fabrics 00:11:35.068 14:48:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.068 14:48:35 -- nvmf/common.sh@124 -- # set -e 00:11:35.068 14:48:35 -- nvmf/common.sh@125 -- # return 0 00:11:35.068 14:48:35 -- nvmf/common.sh@478 -- # '[' -n 155711 ']' 00:11:35.068 14:48:35 -- nvmf/common.sh@479 -- # killprocess 155711 00:11:35.068 14:48:35 -- common/autotest_common.sh@936 -- # '[' -z 155711 ']' 00:11:35.068 14:48:35 -- common/autotest_common.sh@940 -- # kill -0 155711 00:11:35.068 14:48:35 -- common/autotest_common.sh@941 -- # uname 00:11:35.068 14:48:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.068 14:48:35 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 155711 00:11:35.068 14:48:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:35.069 14:48:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:35.069 14:48:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 155711' 00:11:35.069 killing process with pid 155711 00:11:35.069 14:48:35 -- common/autotest_common.sh@955 -- # kill 155711 00:11:35.069 14:48:35 -- common/autotest_common.sh@960 -- # wait 155711 00:11:35.635 [2024-04-26 14:48:35.608845] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:37.536 14:48:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:37.536 14:48:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:37.536 00:11:37.536 real 1m1.407s 00:11:37.536 user 3m52.664s 00:11:37.536 sys 0m3.159s 00:11:37.536 14:48:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:37.536 14:48:37 -- common/autotest_common.sh@10 -- # set +x 00:11:37.536 ************************************ 00:11:37.536 END TEST nvmf_rpc 00:11:37.536 ************************************ 00:11:37.536 14:48:37 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:37.536 14:48:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:37.536 14:48:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:37.536 14:48:37 -- common/autotest_common.sh@10 -- # set +x 00:11:37.536 ************************************ 00:11:37.536 START TEST nvmf_invalid 00:11:37.536 ************************************ 00:11:37.536 14:48:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:37.536 * Looking for test storage... 
00:11:37.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:37.536 14:48:37 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.536 14:48:37 -- nvmf/common.sh@7 -- # uname -s 00:11:37.536 14:48:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.536 14:48:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.536 14:48:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.536 14:48:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.536 14:48:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.536 14:48:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.536 14:48:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.536 14:48:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.536 14:48:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.536 14:48:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.536 14:48:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:37.536 14:48:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:37.536 14:48:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.536 14:48:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.536 14:48:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.536 14:48:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.536 14:48:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:37.536 14:48:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.536 14:48:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.536 14:48:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.537 14:48:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.537 14:48:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.537 14:48:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.537 14:48:37 -- paths/export.sh@5 -- # export PATH 00:11:37.537 14:48:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.537 14:48:37 -- nvmf/common.sh@47 -- # : 0 00:11:37.537 14:48:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.537 14:48:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.537 14:48:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.537 14:48:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.537 14:48:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.537 14:48:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.537 14:48:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.537 14:48:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.537 14:48:37 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:37.537 14:48:37 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:37.537 14:48:37 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:37.537 14:48:37 -- target/invalid.sh@14 -- # target=foobar 00:11:37.537 14:48:37 -- target/invalid.sh@16 -- # RANDOM=0 00:11:37.537 14:48:37 -- target/invalid.sh@34 -- # nvmftestinit 00:11:37.537 14:48:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:37.537 14:48:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.537 14:48:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:37.537 14:48:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:37.537 14:48:37 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:11:37.537 14:48:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.537 14:48:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.537 14:48:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.537 14:48:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:37.537 14:48:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:37.537 14:48:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.537 14:48:37 -- common/autotest_common.sh@10 -- # set +x 00:11:39.440 14:48:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:39.440 14:48:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.440 14:48:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.440 14:48:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.440 14:48:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.440 14:48:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.440 14:48:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.440 14:48:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.440 14:48:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.440 14:48:39 -- nvmf/common.sh@296 -- # e810=() 00:11:39.440 14:48:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.440 14:48:39 -- nvmf/common.sh@297 -- # x722=() 00:11:39.440 14:48:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.440 14:48:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:39.440 14:48:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.440 14:48:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.440 14:48:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.440 14:48:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.440 14:48:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:39.440 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:39.440 14:48:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.440 14:48:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.440 14:48:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:39.440 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:39.440 14:48:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 
00:11:39.440 14:48:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.440 14:48:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.440 14:48:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.440 14:48:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.440 14:48:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:39.440 14:48:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.440 14:48:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:39.440 Found net devices under 0000:09:00.0: mlx_0_0 00:11:39.440 14:48:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.440 14:48:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.440 14:48:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:39.440 14:48:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.440 14:48:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:39.440 Found net devices under 0000:09:00.1: mlx_0_1 00:11:39.440 14:48:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.440 14:48:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:39.440 14:48:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:39.440 14:48:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:39.440 14:48:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:39.440 14:48:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:39.440 14:48:39 -- nvmf/common.sh@58 -- # uname 00:11:39.440 14:48:39 -- nvmf/common.sh@58 -- # '[' Linux 
'!=' Linux ']' 00:11:39.440 14:48:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:39.440 14:48:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:39.440 14:48:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:39.440 14:48:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:39.440 14:48:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:39.440 14:48:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:39.440 14:48:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:39.440 14:48:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:39.441 14:48:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:39.441 14:48:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:39.441 14:48:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.441 14:48:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.441 14:48:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.441 14:48:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.441 14:48:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.441 14:48:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@105 -- # continue 2 00:11:39.441 14:48:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@105 -- # 
continue 2 00:11:39.441 14:48:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.441 14:48:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.441 14:48:39 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:39.441 14:48:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:39.441 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.441 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:11:39.441 altname enp9s0f0np0 00:11:39.441 inet 192.168.100.8/24 scope global mlx_0_0 00:11:39.441 valid_lft forever preferred_lft forever 00:11:39.441 14:48:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.441 14:48:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.441 14:48:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:39.441 14:48:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:39.441 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.441 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:11:39.441 altname enp9s0f1np1 00:11:39.441 inet 192.168.100.9/24 scope global mlx_0_1 00:11:39.441 valid_lft forever preferred_lft forever 00:11:39.441 14:48:39 -- nvmf/common.sh@411 -- # return 0 00:11:39.441 14:48:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:39.441 14:48:39 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:39.441 14:48:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:39.441 14:48:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:39.441 14:48:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.441 14:48:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.441 14:48:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.441 14:48:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.441 14:48:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.441 14:48:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@105 -- # continue 2 00:11:39.441 14:48:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.441 14:48:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.441 14:48:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@105 -- # continue 2 00:11:39.441 14:48:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.441 14:48:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.441 14:48:39 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:11:39.441 14:48:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.441 14:48:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.441 14:48:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.441 14:48:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:39.441 192.168.100.9' 00:11:39.441 14:48:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:39.441 192.168.100.9' 00:11:39.441 14:48:39 -- nvmf/common.sh@446 -- # head -n 1 00:11:39.441 14:48:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:39.441 14:48:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:39.441 192.168.100.9' 00:11:39.441 14:48:39 -- nvmf/common.sh@447 -- # tail -n +2 00:11:39.441 14:48:39 -- nvmf/common.sh@447 -- # head -n 1 00:11:39.441 14:48:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:39.441 14:48:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:39.441 14:48:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:39.441 14:48:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:39.441 14:48:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:39.441 14:48:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:39.441 14:48:39 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:39.441 14:48:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:39.441 14:48:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:39.441 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:11:39.441 14:48:39 -- nvmf/common.sh@470 -- # nvmfpid=165190 00:11:39.441 14:48:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.441 14:48:39 -- 
nvmf/common.sh@471 -- # waitforlisten 165190 00:11:39.441 14:48:39 -- common/autotest_common.sh@817 -- # '[' -z 165190 ']' 00:11:39.441 14:48:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.441 14:48:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:39.441 14:48:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.441 14:48:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:39.441 14:48:39 -- common/autotest_common.sh@10 -- # set +x 00:11:39.700 [2024-04-26 14:48:39.531982] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:39.700 [2024-04-26 14:48:39.532136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.700 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.700 [2024-04-26 14:48:39.668201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.958 [2024-04-26 14:48:39.925728] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.958 [2024-04-26 14:48:39.925808] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.958 [2024-04-26 14:48:39.925837] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.958 [2024-04-26 14:48:39.925861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.958 [2024-04-26 14:48:39.925880] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:39.958 [2024-04-26 14:48:39.926019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.958 [2024-04-26 14:48:39.926082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.958 [2024-04-26 14:48:39.926141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.958 [2024-04-26 14:48:39.926154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.524 14:48:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:40.524 14:48:40 -- common/autotest_common.sh@850 -- # return 0 00:11:40.524 14:48:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:40.524 14:48:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:40.524 14:48:40 -- common/autotest_common.sh@10 -- # set +x 00:11:40.524 14:48:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.524 14:48:40 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:40.524 14:48:40 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27326 00:11:40.781 [2024-04-26 14:48:40.779883] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:40.781 14:48:40 -- target/invalid.sh@40 -- # out='request: 00:11:40.781 { 00:11:40.781 "nqn": "nqn.2016-06.io.spdk:cnode27326", 00:11:40.781 "tgt_name": "foobar", 00:11:40.781 "method": "nvmf_create_subsystem", 00:11:40.781 "req_id": 1 00:11:40.781 } 00:11:40.781 Got JSON-RPC error response 00:11:40.781 response: 00:11:40.781 { 00:11:40.781 "code": -32603, 00:11:40.781 "message": "Unable to find target foobar" 00:11:40.781 }' 00:11:40.781 14:48:40 -- target/invalid.sh@41 -- # [[ request: 00:11:40.781 { 00:11:40.781 "nqn": "nqn.2016-06.io.spdk:cnode27326", 00:11:40.781 "tgt_name": "foobar", 00:11:40.781 "method": "nvmf_create_subsystem", 
00:11:40.781 "req_id": 1 00:11:40.781 } 00:11:40.781 Got JSON-RPC error response 00:11:40.781 response: 00:11:40.781 { 00:11:40.781 "code": -32603, 00:11:40.781 "message": "Unable to find target foobar" 00:11:40.781 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:40.781 14:48:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:40.781 14:48:40 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27393 00:11:41.039 [2024-04-26 14:48:41.024836] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27393: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:41.039 14:48:41 -- target/invalid.sh@45 -- # out='request: 00:11:41.039 { 00:11:41.039 "nqn": "nqn.2016-06.io.spdk:cnode27393", 00:11:41.039 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:41.039 "method": "nvmf_create_subsystem", 00:11:41.039 "req_id": 1 00:11:41.039 } 00:11:41.039 Got JSON-RPC error response 00:11:41.039 response: 00:11:41.039 { 00:11:41.039 "code": -32602, 00:11:41.039 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:41.039 }' 00:11:41.039 14:48:41 -- target/invalid.sh@46 -- # [[ request: 00:11:41.039 { 00:11:41.039 "nqn": "nqn.2016-06.io.spdk:cnode27393", 00:11:41.039 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:41.039 "method": "nvmf_create_subsystem", 00:11:41.039 "req_id": 1 00:11:41.039 } 00:11:41.039 Got JSON-RPC error response 00:11:41.039 response: 00:11:41.039 { 00:11:41.039 "code": -32602, 00:11:41.039 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:41.039 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:41.039 14:48:41 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:41.039 14:48:41 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7800 00:11:41.297 [2024-04-26 14:48:41.261586] nvmf_rpc.c: 
427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7800: invalid model number 'SPDK_Controller' 00:11:41.297 14:48:41 -- target/invalid.sh@50 -- # out='request: 00:11:41.297 { 00:11:41.297 "nqn": "nqn.2016-06.io.spdk:cnode7800", 00:11:41.297 "model_number": "SPDK_Controller\u001f", 00:11:41.297 "method": "nvmf_create_subsystem", 00:11:41.297 "req_id": 1 00:11:41.297 } 00:11:41.297 Got JSON-RPC error response 00:11:41.297 response: 00:11:41.297 { 00:11:41.297 "code": -32602, 00:11:41.297 "message": "Invalid MN SPDK_Controller\u001f" 00:11:41.297 }' 00:11:41.297 14:48:41 -- target/invalid.sh@51 -- # [[ request: 00:11:41.297 { 00:11:41.297 "nqn": "nqn.2016-06.io.spdk:cnode7800", 00:11:41.297 "model_number": "SPDK_Controller\u001f", 00:11:41.297 "method": "nvmf_create_subsystem", 00:11:41.297 "req_id": 1 00:11:41.297 } 00:11:41.297 Got JSON-RPC error response 00:11:41.297 response: 00:11:41.297 { 00:11:41.297 "code": -32602, 00:11:41.297 "message": "Invalid MN SPDK_Controller\u001f" 00:11:41.297 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:41.297 14:48:41 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:41.297 14:48:41 -- target/invalid.sh@19 -- # local length=21 ll 00:11:41.297 14:48:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.297 14:48:41 -- target/invalid.sh@21 -- # local chars 00:11:41.297 14:48:41 -- target/invalid.sh@22 -- # local string 00:11:41.297 14:48:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.297 14:48:41 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # printf %x 101 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # string+=e 00:11:41.297 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.297 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # printf %x 63 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:41.297 14:48:41 -- target/invalid.sh@25 -- # string+='?' 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 124 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+='|' 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 102 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=f 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 56 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=8 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 66 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=B 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 
14:48:41 -- target/invalid.sh@25 -- # printf %x 78 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=N 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 54 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=6 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 80 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=P 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 69 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=E 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 56 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=8 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 53 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=5 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 85 00:11:41.298 
14:48:41 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=U 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 46 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=. 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 85 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=U 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 57 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=9 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 107 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=k 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 69 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=E 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 83 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:41.298 
14:48:41 -- target/invalid.sh@25 -- # string+=S 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 115 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=s 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # printf %x 55 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:41.298 14:48:41 -- target/invalid.sh@25 -- # string+=7 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.298 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.298 14:48:41 -- target/invalid.sh@28 -- # [[ e == \- ]] 00:11:41.298 14:48:41 -- target/invalid.sh@31 -- # echo 'e?|f8BN6PE85U.U9kESs7' 00:11:41.298 14:48:41 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'e?|f8BN6PE85U.U9kESs7' nqn.2016-06.io.spdk:cnode736 00:11:41.556 [2024-04-26 14:48:41.578643] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode736: invalid serial number 'e?|f8BN6PE85U.U9kESs7' 00:11:41.556 14:48:41 -- target/invalid.sh@54 -- # out='request: 00:11:41.556 { 00:11:41.556 "nqn": "nqn.2016-06.io.spdk:cnode736", 00:11:41.556 "serial_number": "e?|f8BN6PE85U.U9kESs7", 00:11:41.556 "method": "nvmf_create_subsystem", 00:11:41.556 "req_id": 1 00:11:41.556 } 00:11:41.556 Got JSON-RPC error response 00:11:41.556 response: 00:11:41.556 { 00:11:41.556 "code": -32602, 00:11:41.556 "message": "Invalid SN e?|f8BN6PE85U.U9kESs7" 00:11:41.556 }' 00:11:41.556 14:48:41 -- target/invalid.sh@55 -- # [[ request: 00:11:41.556 { 00:11:41.556 "nqn": "nqn.2016-06.io.spdk:cnode736", 00:11:41.556 "serial_number": "e?|f8BN6PE85U.U9kESs7", 00:11:41.556 
"method": "nvmf_create_subsystem", 00:11:41.556 "req_id": 1 00:11:41.556 } 00:11:41.556 Got JSON-RPC error response 00:11:41.556 response: 00:11:41.556 { 00:11:41.556 "code": -32602, 00:11:41.556 "message": "Invalid SN e?|f8BN6PE85U.U9kESs7" 00:11:41.556 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:41.556 14:48:41 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:41.556 14:48:41 -- target/invalid.sh@19 -- # local length=41 ll 00:11:41.556 14:48:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.556 14:48:41 -- target/invalid.sh@21 -- # local chars 00:11:41.556 14:48:41 -- target/invalid.sh@22 -- # local string 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 82 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=R 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 121 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=y 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 117 00:11:41.556 14:48:41 -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=u 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 123 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+='{' 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 70 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=F 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 47 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=/ 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # printf %x 84 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:41.556 14:48:41 -- target/invalid.sh@25 -- # string+=T 00:11:41.556 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.557 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.557 14:48:41 -- target/invalid.sh@25 -- # printf %x 120 00:11:41.557 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:41.557 14:48:41 -- target/invalid.sh@25 -- # string+=x 00:11:41.557 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.557 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.557 14:48:41 -- target/invalid.sh@25 -- # printf %x 38 00:11:41.557 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:41.557 14:48:41 
-- target/invalid.sh@25 -- # string+='&' 00:11:41.557 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.557 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # printf %x 122 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # string+=z 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # printf %x 71 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # string+=G 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # printf %x 125 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # string+='}' 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.814 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # printf %x 79 00:11:41.814 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=O 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 113 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=q 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 38 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='&' 00:11:41.815 
14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 34 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='"' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 41 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=')' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 42 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='*' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 37 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=% 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 40 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='(' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 98 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=b 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 58 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=: 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 104 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=h 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 101 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=e 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 55 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=7 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 70 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=F 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 34 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='"' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 45 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=- 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 117 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=u 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 71 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=G 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 76 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=L 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 41 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=')' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 74 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=J 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # 
printf %x 104 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=h 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 120 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=x 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 76 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=L 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 34 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='"' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 117 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+=u 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # printf %x 92 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:41.815 14:48:41 -- target/invalid.sh@25 -- # string+='\' 00:11:41.815 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.816 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.816 14:48:41 -- target/invalid.sh@25 -- # printf %x 124 00:11:41.816 14:48:41 -- target/invalid.sh@25 
-- # echo -e '\x7c' 00:11:41.816 14:48:41 -- target/invalid.sh@25 -- # string+='|' 00:11:41.816 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.816 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.816 14:48:41 -- target/invalid.sh@25 -- # printf %x 94 00:11:41.816 14:48:41 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:41.816 14:48:41 -- target/invalid.sh@25 -- # string+='^' 00:11:41.816 14:48:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.816 14:48:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.816 14:48:41 -- target/invalid.sh@28 -- # [[ R == \- ]] 00:11:41.816 14:48:41 -- target/invalid.sh@31 -- # echo 'Ryu{F/Tx&zG}Oq&")*%(b:he7F"-uGL)JhxL"u\|^' 00:11:41.816 14:48:41 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Ryu{F/Tx&zG}Oq&")*%(b:he7F"-uGL)JhxL"u\|^' nqn.2016-06.io.spdk:cnode16456 00:11:42.073 [2024-04-26 14:48:41.980023] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16456: invalid model number 'Ryu{F/Tx&zG}Oq&")*%(b:he7F"-uGL)JhxL"u\|^' 00:11:42.073 14:48:42 -- target/invalid.sh@58 -- # out='request: 00:11:42.073 { 00:11:42.073 "nqn": "nqn.2016-06.io.spdk:cnode16456", 00:11:42.073 "model_number": "Ryu{F/Tx&zG}Oq&\")*%(b:he7F\"-uGL)JhxL\"u\\|^", 00:11:42.073 "method": "nvmf_create_subsystem", 00:11:42.073 "req_id": 1 00:11:42.073 } 00:11:42.073 Got JSON-RPC error response 00:11:42.073 response: 00:11:42.073 { 00:11:42.073 "code": -32602, 00:11:42.073 "message": "Invalid MN Ryu{F/Tx&zG}Oq&\")*%(b:he7F\"-uGL)JhxL\"u\\|^" 00:11:42.073 }' 00:11:42.073 14:48:42 -- target/invalid.sh@59 -- # [[ request: 00:11:42.073 { 00:11:42.073 "nqn": "nqn.2016-06.io.spdk:cnode16456", 00:11:42.073 "model_number": "Ryu{F/Tx&zG}Oq&\")*%(b:he7F\"-uGL)JhxL\"u\\|^", 00:11:42.073 "method": "nvmf_create_subsystem", 00:11:42.073 "req_id": 1 00:11:42.073 } 00:11:42.073 Got JSON-RPC error response 00:11:42.073 response: 00:11:42.073 { 
00:11:42.073 "code": -32602, 00:11:42.073 "message": "Invalid MN Ryu{F/Tx&zG}Oq&\")*%(b:he7F\"-uGL)JhxL\"u\\|^" 00:11:42.073 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:42.073 14:48:42 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:11:42.330 [2024-04-26 14:48:42.246921] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7fbfdcc45940) succeed. 00:11:42.330 [2024-04-26 14:48:42.257819] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7fbfdcc01940) succeed. 00:11:42.587 14:48:42 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:42.845 14:48:42 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:11:42.845 14:48:42 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:11:42.845 192.168.100.9' 00:11:42.845 14:48:42 -- target/invalid.sh@67 -- # head -n 1 00:11:42.845 14:48:42 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:11:42.845 14:48:42 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:11:43.102 [2024-04-26 14:48:43.102290] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:43.102 14:48:43 -- target/invalid.sh@69 -- # out='request: 00:11:43.102 { 00:11:43.102 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.102 "listen_address": { 00:11:43.102 "trtype": "rdma", 00:11:43.102 "traddr": "192.168.100.8", 00:11:43.102 "trsvcid": "4421" 00:11:43.102 }, 00:11:43.102 "method": "nvmf_subsystem_remove_listener", 00:11:43.102 "req_id": 1 00:11:43.102 } 00:11:43.102 Got JSON-RPC error response 00:11:43.102 response: 00:11:43.102 { 00:11:43.102 "code": -32602, 00:11:43.102 "message": "Invalid parameters" 00:11:43.102 }' 00:11:43.102 14:48:43 -- target/invalid.sh@70 -- # [[ 
request: 00:11:43.102 { 00:11:43.102 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.102 "listen_address": { 00:11:43.102 "trtype": "rdma", 00:11:43.102 "traddr": "192.168.100.8", 00:11:43.102 "trsvcid": "4421" 00:11:43.102 }, 00:11:43.102 "method": "nvmf_subsystem_remove_listener", 00:11:43.102 "req_id": 1 00:11:43.102 } 00:11:43.102 Got JSON-RPC error response 00:11:43.102 response: 00:11:43.102 { 00:11:43.102 "code": -32602, 00:11:43.102 "message": "Invalid parameters" 00:11:43.102 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:43.102 14:48:43 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9602 -i 0 00:11:43.367 [2024-04-26 14:48:43.343049] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9602: invalid cntlid range [0-65519] 00:11:43.367 14:48:43 -- target/invalid.sh@73 -- # out='request: 00:11:43.367 { 00:11:43.367 "nqn": "nqn.2016-06.io.spdk:cnode9602", 00:11:43.367 "min_cntlid": 0, 00:11:43.367 "method": "nvmf_create_subsystem", 00:11:43.367 "req_id": 1 00:11:43.367 } 00:11:43.367 Got JSON-RPC error response 00:11:43.367 response: 00:11:43.367 { 00:11:43.367 "code": -32602, 00:11:43.367 "message": "Invalid cntlid range [0-65519]" 00:11:43.367 }' 00:11:43.367 14:48:43 -- target/invalid.sh@74 -- # [[ request: 00:11:43.367 { 00:11:43.367 "nqn": "nqn.2016-06.io.spdk:cnode9602", 00:11:43.367 "min_cntlid": 0, 00:11:43.367 "method": "nvmf_create_subsystem", 00:11:43.367 "req_id": 1 00:11:43.367 } 00:11:43.367 Got JSON-RPC error response 00:11:43.367 response: 00:11:43.367 { 00:11:43.367 "code": -32602, 00:11:43.367 "message": "Invalid cntlid range [0-65519]" 00:11:43.367 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.367 14:48:43 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17221 -i 65520 00:11:43.624 [2024-04-26 
14:48:43.608058] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17221: invalid cntlid range [65520-65519] 00:11:43.624 14:48:43 -- target/invalid.sh@75 -- # out='request: 00:11:43.624 { 00:11:43.624 "nqn": "nqn.2016-06.io.spdk:cnode17221", 00:11:43.624 "min_cntlid": 65520, 00:11:43.624 "method": "nvmf_create_subsystem", 00:11:43.624 "req_id": 1 00:11:43.624 } 00:11:43.624 Got JSON-RPC error response 00:11:43.624 response: 00:11:43.624 { 00:11:43.624 "code": -32602, 00:11:43.624 "message": "Invalid cntlid range [65520-65519]" 00:11:43.624 }' 00:11:43.624 14:48:43 -- target/invalid.sh@76 -- # [[ request: 00:11:43.624 { 00:11:43.624 "nqn": "nqn.2016-06.io.spdk:cnode17221", 00:11:43.624 "min_cntlid": 65520, 00:11:43.624 "method": "nvmf_create_subsystem", 00:11:43.624 "req_id": 1 00:11:43.624 } 00:11:43.624 Got JSON-RPC error response 00:11:43.624 response: 00:11:43.624 { 00:11:43.624 "code": -32602, 00:11:43.624 "message": "Invalid cntlid range [65520-65519]" 00:11:43.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.624 14:48:43 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18070 -I 0 00:11:43.882 [2024-04-26 14:48:43.848917] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18070: invalid cntlid range [1-0] 00:11:43.882 14:48:43 -- target/invalid.sh@77 -- # out='request: 00:11:43.882 { 00:11:43.882 "nqn": "nqn.2016-06.io.spdk:cnode18070", 00:11:43.882 "max_cntlid": 0, 00:11:43.882 "method": "nvmf_create_subsystem", 00:11:43.882 "req_id": 1 00:11:43.882 } 00:11:43.882 Got JSON-RPC error response 00:11:43.882 response: 00:11:43.882 { 00:11:43.882 "code": -32602, 00:11:43.882 "message": "Invalid cntlid range [1-0]" 00:11:43.882 }' 00:11:43.882 14:48:43 -- target/invalid.sh@78 -- # [[ request: 00:11:43.882 { 00:11:43.882 "nqn": "nqn.2016-06.io.spdk:cnode18070", 00:11:43.882 "max_cntlid": 0, 
00:11:43.882 "method": "nvmf_create_subsystem", 00:11:43.882 "req_id": 1 00:11:43.882 } 00:11:43.882 Got JSON-RPC error response 00:11:43.882 response: 00:11:43.882 { 00:11:43.882 "code": -32602, 00:11:43.882 "message": "Invalid cntlid range [1-0]" 00:11:43.882 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.882 14:48:43 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25752 -I 65520 00:11:44.139 [2024-04-26 14:48:44.113922] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25752: invalid cntlid range [1-65520] 00:11:44.139 14:48:44 -- target/invalid.sh@79 -- # out='request: 00:11:44.139 { 00:11:44.139 "nqn": "nqn.2016-06.io.spdk:cnode25752", 00:11:44.139 "max_cntlid": 65520, 00:11:44.139 "method": "nvmf_create_subsystem", 00:11:44.139 "req_id": 1 00:11:44.139 } 00:11:44.139 Got JSON-RPC error response 00:11:44.139 response: 00:11:44.139 { 00:11:44.139 "code": -32602, 00:11:44.139 "message": "Invalid cntlid range [1-65520]" 00:11:44.139 }' 00:11:44.139 14:48:44 -- target/invalid.sh@80 -- # [[ request: 00:11:44.139 { 00:11:44.139 "nqn": "nqn.2016-06.io.spdk:cnode25752", 00:11:44.139 "max_cntlid": 65520, 00:11:44.139 "method": "nvmf_create_subsystem", 00:11:44.139 "req_id": 1 00:11:44.139 } 00:11:44.139 Got JSON-RPC error response 00:11:44.139 response: 00:11:44.139 { 00:11:44.139 "code": -32602, 00:11:44.139 "message": "Invalid cntlid range [1-65520]" 00:11:44.139 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.139 14:48:44 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32544 -i 6 -I 5 00:11:44.396 [2024-04-26 14:48:44.354788] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32544: invalid cntlid range [6-5] 00:11:44.396 14:48:44 -- target/invalid.sh@83 -- # out='request: 00:11:44.396 { 
00:11:44.396 "nqn": "nqn.2016-06.io.spdk:cnode32544", 00:11:44.396 "min_cntlid": 6, 00:11:44.396 "max_cntlid": 5, 00:11:44.396 "method": "nvmf_create_subsystem", 00:11:44.396 "req_id": 1 00:11:44.396 } 00:11:44.396 Got JSON-RPC error response 00:11:44.396 response: 00:11:44.396 { 00:11:44.396 "code": -32602, 00:11:44.396 "message": "Invalid cntlid range [6-5]" 00:11:44.396 }' 00:11:44.396 14:48:44 -- target/invalid.sh@84 -- # [[ request: 00:11:44.396 { 00:11:44.396 "nqn": "nqn.2016-06.io.spdk:cnode32544", 00:11:44.396 "min_cntlid": 6, 00:11:44.396 "max_cntlid": 5, 00:11:44.396 "method": "nvmf_create_subsystem", 00:11:44.396 "req_id": 1 00:11:44.396 } 00:11:44.396 Got JSON-RPC error response 00:11:44.396 response: 00:11:44.396 { 00:11:44.396 "code": -32602, 00:11:44.396 "message": "Invalid cntlid range [6-5]" 00:11:44.396 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.396 14:48:44 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:44.653 14:48:44 -- target/invalid.sh@87 -- # out='request: 00:11:44.654 { 00:11:44.654 "name": "foobar", 00:11:44.654 "method": "nvmf_delete_target", 00:11:44.654 "req_id": 1 00:11:44.654 } 00:11:44.654 Got JSON-RPC error response 00:11:44.654 response: 00:11:44.654 { 00:11:44.654 "code": -32602, 00:11:44.654 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:44.654 }' 00:11:44.654 14:48:44 -- target/invalid.sh@88 -- # [[ request: 00:11:44.654 { 00:11:44.654 "name": "foobar", 00:11:44.654 "method": "nvmf_delete_target", 00:11:44.654 "req_id": 1 00:11:44.654 } 00:11:44.654 Got JSON-RPC error response 00:11:44.654 response: 00:11:44.654 { 00:11:44.654 "code": -32602, 00:11:44.654 "message": "The specified target doesn't exist, cannot delete it." 
00:11:44.654 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:44.654 14:48:44 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:44.654 14:48:44 -- target/invalid.sh@91 -- # nvmftestfini 00:11:44.654 14:48:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:44.654 14:48:44 -- nvmf/common.sh@117 -- # sync 00:11:44.654 14:48:44 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:44.654 14:48:44 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:44.654 14:48:44 -- nvmf/common.sh@120 -- # set +e 00:11:44.654 14:48:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.654 14:48:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:44.654 rmmod nvme_rdma 00:11:44.654 rmmod nvme_fabrics 00:11:44.654 14:48:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.654 14:48:44 -- nvmf/common.sh@124 -- # set -e 00:11:44.654 14:48:44 -- nvmf/common.sh@125 -- # return 0 00:11:44.654 14:48:44 -- nvmf/common.sh@478 -- # '[' -n 165190 ']' 00:11:44.654 14:48:44 -- nvmf/common.sh@479 -- # killprocess 165190 00:11:44.654 14:48:44 -- common/autotest_common.sh@936 -- # '[' -z 165190 ']' 00:11:44.654 14:48:44 -- common/autotest_common.sh@940 -- # kill -0 165190 00:11:44.654 14:48:44 -- common/autotest_common.sh@941 -- # uname 00:11:44.654 14:48:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.654 14:48:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 165190 00:11:44.654 14:48:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.654 14:48:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.654 14:48:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 165190' 00:11:44.654 killing process with pid 165190 00:11:44.654 14:48:44 -- common/autotest_common.sh@955 -- # kill 165190 00:11:44.654 14:48:44 -- common/autotest_common.sh@960 -- # wait 165190 00:11:45.219 [2024-04-26 14:48:45.075043] 
rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:46.591 14:48:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:46.591 14:48:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:46.591 00:11:46.591 real 0m9.024s 00:11:46.591 user 0m27.435s 00:11:46.591 sys 0m2.682s 00:11:46.591 14:48:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.591 14:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:46.591 ************************************ 00:11:46.591 END TEST nvmf_invalid 00:11:46.591 ************************************ 00:11:46.591 14:48:46 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:46.591 14:48:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.591 14:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.591 14:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:46.591 ************************************ 00:11:46.591 START TEST nvmf_abort 00:11:46.592 ************************************ 00:11:46.592 14:48:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:11:46.592 * Looking for test storage... 
00:11:46.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.592 14:48:46 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.592 14:48:46 -- nvmf/common.sh@7 -- # uname -s 00:11:46.592 14:48:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.592 14:48:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.592 14:48:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.592 14:48:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.592 14:48:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.592 14:48:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.592 14:48:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.592 14:48:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.592 14:48:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.592 14:48:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.592 14:48:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:46.592 14:48:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:46.592 14:48:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.592 14:48:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.592 14:48:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.592 14:48:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.592 14:48:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.592 14:48:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.592 14:48:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.592 14:48:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.592 14:48:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.592 14:48:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.592 14:48:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.592 14:48:46 -- paths/export.sh@5 -- # export PATH 00:11:46.592 14:48:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.592 14:48:46 -- nvmf/common.sh@47 -- # : 0 00:11:46.592 14:48:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.592 14:48:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.592 14:48:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.592 14:48:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.592 14:48:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.592 14:48:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.592 14:48:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.592 14:48:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.592 14:48:46 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.592 14:48:46 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:46.592 14:48:46 -- target/abort.sh@14 -- # nvmftestinit 00:11:46.592 14:48:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:46.592 14:48:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.592 14:48:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:46.592 14:48:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:46.592 14:48:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:46.592 14:48:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.592 14:48:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.592 14:48:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.592 14:48:46 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:46.592 14:48:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:46.592 14:48:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.592 14:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:48.496 14:48:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:48.496 14:48:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:48.496 14:48:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:48.496 14:48:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:48.496 14:48:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:48.496 14:48:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:48.496 14:48:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:48.496 14:48:48 -- nvmf/common.sh@295 -- # net_devs=() 00:11:48.496 14:48:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:48.496 14:48:48 -- nvmf/common.sh@296 -- # e810=() 00:11:48.496 14:48:48 -- nvmf/common.sh@296 -- # local -ga e810 00:11:48.496 14:48:48 -- nvmf/common.sh@297 -- # x722=() 00:11:48.496 14:48:48 -- nvmf/common.sh@297 -- # local -ga x722 00:11:48.496 14:48:48 -- nvmf/common.sh@298 -- # mlx=() 00:11:48.496 14:48:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:48.496 14:48:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.496 14:48:48 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.496 14:48:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:48.496 14:48:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:48.496 14:48:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:48.496 14:48:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:48.496 14:48:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:48.496 14:48:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.496 14:48:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:48.496 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:48.496 14:48:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.496 14:48:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.496 14:48:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:48.496 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:48.496 14:48:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.496 14:48:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:48.496 14:48:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:48.496 14:48:48 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:48.496 14:48:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.496 14:48:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:48.496 14:48:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.496 14:48:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:48.496 Found net devices under 0000:09:00.0: mlx_0_0 00:11:48.496 14:48:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.496 14:48:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.497 14:48:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:48.497 14:48:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.497 14:48:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:48.497 Found net devices under 0000:09:00.1: mlx_0_1 00:11:48.497 14:48:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.497 14:48:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:48.497 14:48:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:48.497 14:48:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:48.497 14:48:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:48.497 14:48:48 -- nvmf/common.sh@58 -- # uname 00:11:48.497 14:48:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:48.497 14:48:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:48.497 14:48:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:48.497 14:48:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:48.497 14:48:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:48.497 14:48:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:48.497 
14:48:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:48.497 14:48:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:48.497 14:48:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:48.497 14:48:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:48.497 14:48:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:48.497 14:48:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.497 14:48:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.497 14:48:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.497 14:48:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.497 14:48:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.497 14:48:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:48.497 14:48:48 -- nvmf/common.sh@105 -- # continue 2 00:11:48.497 14:48:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.497 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:48.497 14:48:48 -- nvmf/common.sh@105 -- # continue 2 00:11:48.497 14:48:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.497 14:48:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:48.497 14:48:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:48.497 14:48:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:48.497 14:48:48 -- nvmf/common.sh@113 
-- # awk '{print $4}' 00:11:48.497 14:48:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.497 14:48:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:48.497 14:48:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:48.497 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.497 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:11:48.497 altname enp9s0f0np0 00:11:48.497 inet 192.168.100.8/24 scope global mlx_0_0 00:11:48.497 valid_lft forever preferred_lft forever 00:11:48.497 14:48:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.497 14:48:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:48.497 14:48:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:48.497 14:48:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:48.497 14:48:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.497 14:48:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.497 14:48:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:48.497 14:48:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:48.497 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.497 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:11:48.497 altname enp9s0f1np1 00:11:48.497 inet 192.168.100.9/24 scope global mlx_0_1 00:11:48.497 valid_lft forever preferred_lft forever 00:11:48.497 14:48:48 -- nvmf/common.sh@411 -- # return 0 00:11:48.497 14:48:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:48.497 14:48:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:48.497 14:48:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:48.497 14:48:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:48.497 14:48:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:48.497 14:48:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.497 
14:48:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.497 14:48:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.497 14:48:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.756 14:48:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.756 14:48:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.756 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.756 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.756 14:48:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:48.756 14:48:48 -- nvmf/common.sh@105 -- # continue 2 00:11:48.756 14:48:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.756 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.756 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.756 14:48:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.756 14:48:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.756 14:48:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:48.756 14:48:48 -- nvmf/common.sh@105 -- # continue 2 00:11:48.756 14:48:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.756 14:48:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:48.756 14:48:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.756 14:48:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.756 14:48:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:48.756 14:48:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:11:48.756 14:48:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.756 14:48:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:48.756 192.168.100.9' 00:11:48.756 14:48:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:48.756 192.168.100.9' 00:11:48.756 14:48:48 -- nvmf/common.sh@446 -- # head -n 1 00:11:48.756 14:48:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:48.756 14:48:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:48.756 192.168.100.9' 00:11:48.756 14:48:48 -- nvmf/common.sh@447 -- # tail -n +2 00:11:48.756 14:48:48 -- nvmf/common.sh@447 -- # head -n 1 00:11:48.756 14:48:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:48.756 14:48:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:48.756 14:48:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:48.756 14:48:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:48.756 14:48:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:48.756 14:48:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:48.756 14:48:48 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:48.756 14:48:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:48.756 14:48:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:48.756 14:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:48.756 14:48:48 -- nvmf/common.sh@470 -- # nvmfpid=167952 00:11:48.756 14:48:48 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:48.756 14:48:48 -- nvmf/common.sh@471 -- # waitforlisten 167952 00:11:48.756 14:48:48 -- common/autotest_common.sh@817 -- # '[' -z 167952 ']' 00:11:48.756 14:48:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.756 14:48:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:48.756 14:48:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.756 14:48:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:48.756 14:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:48.756 [2024-04-26 14:48:48.710243] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:48.756 [2024-04-26 14:48:48.710379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.757 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.757 [2024-04-26 14:48:48.834926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.015 [2024-04-26 14:48:49.086368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.015 [2024-04-26 14:48:49.086448] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.015 [2024-04-26 14:48:49.086473] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.015 [2024-04-26 14:48:49.086495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.015 [2024-04-26 14:48:49.086513] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:49.015 [2024-04-26 14:48:49.086660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.015 [2024-04-26 14:48:49.086739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.015 [2024-04-26 14:48:49.086746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.948 14:48:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:49.948 14:48:49 -- common/autotest_common.sh@850 -- # return 0 00:11:49.948 14:48:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:49.948 14:48:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:49.948 14:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:49.948 14:48:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.948 14:48:49 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:11:49.948 14:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:49.948 14:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:49.948 [2024-04-26 14:48:49.717451] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7f57c795c940) succeed. 00:11:49.948 [2024-04-26 14:48:49.728236] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7f57c7915940) succeed. 
00:11:49.948 14:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:49.948 14:48:49 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:49.948 14:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:49.948 14:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 Malloc0 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:50.207 14:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.207 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 Delay0 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:50.207 14:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.207 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:50.207 14:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.207 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:50.207 14:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.207 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 [2024-04-26 14:48:50.066532] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:50.207 14:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.207 14:48:50 -- common/autotest_common.sh@10 -- # set +x 00:11:50.207 14:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.207 14:48:50 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:50.207 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.207 [2024-04-26 14:48:50.207557] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:52.733 Initializing NVMe Controllers 00:11:52.733 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:52.733 controller IO queue size 128 less than required 00:11:52.733 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:52.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:52.733 Initialization complete. Launching workers. 
00:11:52.733 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31549 00:11:52.733 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31610, failed to submit 62 00:11:52.733 success 31552, unsuccess 58, failed 0 00:11:52.733 14:48:52 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:52.733 14:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.733 14:48:52 -- common/autotest_common.sh@10 -- # set +x 00:11:52.733 14:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.733 14:48:52 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:52.733 14:48:52 -- target/abort.sh@38 -- # nvmftestfini 00:11:52.733 14:48:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:52.733 14:48:52 -- nvmf/common.sh@117 -- # sync 00:11:52.733 14:48:52 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:52.733 14:48:52 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:52.733 14:48:52 -- nvmf/common.sh@120 -- # set +e 00:11:52.733 14:48:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.733 14:48:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:52.733 rmmod nvme_rdma 00:11:52.733 rmmod nvme_fabrics 00:11:52.733 14:48:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.733 14:48:52 -- nvmf/common.sh@124 -- # set -e 00:11:52.733 14:48:52 -- nvmf/common.sh@125 -- # return 0 00:11:52.733 14:48:52 -- nvmf/common.sh@478 -- # '[' -n 167952 ']' 00:11:52.733 14:48:52 -- nvmf/common.sh@479 -- # killprocess 167952 00:11:52.733 14:48:52 -- common/autotest_common.sh@936 -- # '[' -z 167952 ']' 00:11:52.733 14:48:52 -- common/autotest_common.sh@940 -- # kill -0 167952 00:11:52.733 14:48:52 -- common/autotest_common.sh@941 -- # uname 00:11:52.733 14:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.733 14:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 167952 00:11:52.733 
14:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:52.733 14:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:52.733 14:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 167952' 00:11:52.733 killing process with pid 167952 00:11:52.733 14:48:52 -- common/autotest_common.sh@955 -- # kill 167952 00:11:52.733 14:48:52 -- common/autotest_common.sh@960 -- # wait 167952 00:11:52.991 [2024-04-26 14:48:52.843797] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:54.364 14:48:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:54.364 14:48:54 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:54.364 00:11:54.364 real 0m7.832s 00:11:54.364 user 0m17.366s 00:11:54.364 sys 0m2.152s 00:11:54.364 14:48:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.364 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:11:54.364 ************************************ 00:11:54.364 END TEST nvmf_abort 00:11:54.365 ************************************ 00:11:54.365 14:48:54 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:54.365 14:48:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:54.365 14:48:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.365 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:11:54.365 ************************************ 00:11:54.365 START TEST nvmf_ns_hotplug_stress 00:11:54.365 ************************************ 00:11:54.365 14:48:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:54.365 * Looking for test storage... 
00:11:54.365 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.365 14:48:54 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.365 14:48:54 -- nvmf/common.sh@7 -- # uname -s 00:11:54.365 14:48:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.365 14:48:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.365 14:48:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.365 14:48:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.365 14:48:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.365 14:48:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.365 14:48:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.365 14:48:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.365 14:48:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.365 14:48:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.365 14:48:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:11:54.365 14:48:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:11:54.365 14:48:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.365 14:48:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.365 14:48:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.365 14:48:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.365 14:48:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:54.365 14:48:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.365 14:48:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.365 14:48:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.365 14:48:54 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.365 14:48:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.365 14:48:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.365 14:48:54 -- paths/export.sh@5 -- # export PATH 00:11:54.365 14:48:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.365 14:48:54 -- nvmf/common.sh@47 -- # : 0 00:11:54.365 14:48:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.365 14:48:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.365 14:48:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.365 14:48:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.365 14:48:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.365 14:48:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.365 14:48:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.365 14:48:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.623 14:48:54 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:54.623 14:48:54 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:11:54.623 14:48:54 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:54.623 14:48:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.623 14:48:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:54.623 14:48:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:54.623 14:48:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:54.623 14:48:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.623 14:48:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.623 14:48:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.623 14:48:54 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:54.623 14:48:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:54.623 14:48:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.623 14:48:54 -- common/autotest_common.sh@10 -- # set +x 00:11:56.524 14:48:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:56.524 14:48:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.524 14:48:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.524 14:48:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.524 14:48:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.524 14:48:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.524 14:48:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.524 14:48:56 -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.524 14:48:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.524 14:48:56 -- nvmf/common.sh@296 -- # e810=() 00:11:56.524 14:48:56 -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.524 14:48:56 -- nvmf/common.sh@297 -- # x722=() 00:11:56.524 14:48:56 -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.524 14:48:56 -- nvmf/common.sh@298 -- # mlx=() 00:11:56.524 14:48:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.524 14:48:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.524 14:48:56 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.524 14:48:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:56.524 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:56.524 14:48:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.524 14:48:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:56.524 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:56.524 14:48:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.524 14:48:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.524 14:48:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.524 14:48:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:56.524 Found net devices under 0000:09:00.0: mlx_0_0 00:11:56.524 14:48:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.524 14:48:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.524 14:48:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:56.524 Found net devices under 0000:09:00.1: mlx_0_1 00:11:56.524 14:48:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.524 14:48:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:56.524 14:48:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:56.524 14:48:56 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:56.524 14:48:56 -- nvmf/common.sh@58 -- # uname 00:11:56.524 14:48:56 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:56.524 14:48:56 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:56.524 14:48:56 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:56.524 14:48:56 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:56.524 14:48:56 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:56.524 14:48:56 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:56.524 
14:48:56 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:56.524 14:48:56 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:56.524 14:48:56 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:56.524 14:48:56 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:56.524 14:48:56 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:56.524 14:48:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.524 14:48:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:56.524 14:48:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:56.524 14:48:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.524 14:48:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:56.524 14:48:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:56.524 14:48:56 -- nvmf/common.sh@105 -- # continue 2 00:11:56.524 14:48:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.524 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:56.524 14:48:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:56.524 14:48:56 -- nvmf/common.sh@105 -- # continue 2 00:11:56.524 14:48:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:56.524 14:48:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:56.524 14:48:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:56.524 14:48:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:56.525 14:48:56 -- nvmf/common.sh@113 
-- # awk '{print $4}' 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.525 14:48:56 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:56.525 14:48:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:56.525 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:56.525 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:11:56.525 altname enp9s0f0np0 00:11:56.525 inet 192.168.100.8/24 scope global mlx_0_0 00:11:56.525 valid_lft forever preferred_lft forever 00:11:56.525 14:48:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:56.525 14:48:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.525 14:48:56 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:56.525 14:48:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:56.525 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:56.525 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:11:56.525 altname enp9s0f1np1 00:11:56.525 inet 192.168.100.9/24 scope global mlx_0_1 00:11:56.525 valid_lft forever preferred_lft forever 00:11:56.525 14:48:56 -- nvmf/common.sh@411 -- # return 0 00:11:56.525 14:48:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:56.525 14:48:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:56.525 14:48:56 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:56.525 14:48:56 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:56.525 14:48:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.525 
14:48:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:56.525 14:48:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:56.525 14:48:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.525 14:48:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:56.525 14:48:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.525 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.525 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:56.525 14:48:56 -- nvmf/common.sh@105 -- # continue 2 00:11:56.525 14:48:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.525 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.525 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.525 14:48:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:56.525 14:48:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@105 -- # continue 2 00:11:56.525 14:48:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:56.525 14:48:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:56.525 14:48:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.525 14:48:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:56.525 14:48:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:11:56.525 14:48:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.525 14:48:56 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:56.525 192.168.100.9' 00:11:56.525 14:48:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:56.525 192.168.100.9' 00:11:56.525 14:48:56 -- nvmf/common.sh@446 -- # head -n 1 00:11:56.525 14:48:56 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:56.525 14:48:56 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:56.525 192.168.100.9' 00:11:56.525 14:48:56 -- nvmf/common.sh@447 -- # tail -n +2 00:11:56.525 14:48:56 -- nvmf/common.sh@447 -- # head -n 1 00:11:56.525 14:48:56 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:56.525 14:48:56 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:56.525 14:48:56 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:56.525 14:48:56 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:56.525 14:48:56 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:56.525 14:48:56 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:56.525 14:48:56 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:56.525 14:48:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:56.525 14:48:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:56.525 14:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:56.525 14:48:56 -- nvmf/common.sh@470 -- # nvmfpid=170311 00:11:56.525 14:48:56 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:56.525 14:48:56 -- nvmf/common.sh@471 -- # waitforlisten 170311 00:11:56.525 14:48:56 -- common/autotest_common.sh@817 -- # '[' -z 170311 ']' 00:11:56.525 14:48:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.525 14:48:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:56.525 14:48:56 -- common/autotest_common.sh@824 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.525 14:48:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:56.525 14:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 [2024-04-26 14:48:56.664008] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:56.782 [2024-04-26 14:48:56.664158] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.782 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.782 [2024-04-26 14:48:56.790249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.039 [2024-04-26 14:48:57.038080] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.039 [2024-04-26 14:48:57.038198] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.039 [2024-04-26 14:48:57.038224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.039 [2024-04-26 14:48:57.038247] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.039 [2024-04-26 14:48:57.038277] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.039 [2024-04-26 14:48:57.038428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.040 [2024-04-26 14:48:57.038473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.040 [2024-04-26 14:48:57.038478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.604 14:48:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:57.604 14:48:57 -- common/autotest_common.sh@850 -- # return 0 00:11:57.604 14:48:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:57.604 14:48:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:57.604 14:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:57.604 14:48:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.604 14:48:57 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:57.604 14:48:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:57.862 [2024-04-26 14:48:57.847716] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7ff578d93940) succeed. 00:11:57.862 [2024-04-26 14:48:57.858366] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7ff578d4f940) succeed. 
00:11:58.120 14:48:58 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:58.378 14:48:58 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:58.635 [2024-04-26 14:48:58.551838] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:58.635 14:48:58 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:58.893 14:48:58 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:59.151 Malloc0 00:11:59.151 14:48:59 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:59.409 Delay0 00:11:59.409 14:48:59 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.667 14:48:59 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:59.925 NULL1 00:11:59.925 14:48:59 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.182 14:49:00 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=170745 00:12:00.182 14:49:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 
1000 00:12:00.182 14:49:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:00.182 14:49:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.439 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.370 Read completed with error (sct=0, sc=11) 00:12:01.628 14:49:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:01.628 14:49:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:01.628 14:49:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:01.886 true 00:12:01.886 14:49:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:01.886 14:49:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 14:49:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.077 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.077 14:49:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:03.077 14:49:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:03.335 true 00:12:03.335 14:49:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:03.335 14:49:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.900 14:49:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.157 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:12:04.415 14:49:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:04.415 14:49:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:04.672 true 00:12:04.672 14:49:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:04.673 14:49:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.237 14:49:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.494 14:49:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:05.494 14:49:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:05.774 true 00:12:05.774 14:49:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:05.774 14:49:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.705 14:49:06 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.962 14:49:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:06.962 14:49:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:07.220 true 00:12:07.220 14:49:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:07.220 14:49:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.151 14:49:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:08.408 14:49:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:08.408 14:49:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:08.665 true 00:12:08.665 14:49:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:08.665 14:49:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.229 14:49:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.486 14:49:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:09.486 14:49:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:09.742 true 00:12:09.742 14:49:09 
-- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:09.742 14:49:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.673 14:49:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.930 14:49:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:10.930 14:49:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:11.187 true 00:12:11.187 14:49:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:11.187 14:49:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.445 14:49:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.703 14:49:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:11.703 14:49:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:11.703 true 00:12:11.703 14:49:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:11.703 14:49:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.635 14:49:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.635 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:12:12.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.905 [2024-04-26 14:49:12.849102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" error records, one per outstanding read I/O, repeated verbatim from 14:49:12.849102 through 14:49:12.861630; repeats elided ...]
00:12:12.907 [2024-04-26 14:49:12.861630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.861694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.861755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.861817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.861880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.861946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 
14:49:12.862651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.862969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.863981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 
[2024-04-26 14:49:12.864786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.864976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865904] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.865965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.866985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 14:49:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:12.907 [2024-04-26 14:49:12.867686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 [2024-04-26 14:49:12.867753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.907 14:49:12 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:12.908 [2024-04-26 14:49:12.867815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.867875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.868992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.869950] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.870999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.871988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 
14:49:12.872058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.872995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.873998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 
[2024-04-26 14:49:12.874216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.908 [2024-04-26 14:49:12.874808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.874873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.874935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.875201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.875287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.875354] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.909 [2024-04-26 14:49:12.875444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated for timestamps 14:49:12.875506 through 14:49:12.897655 (elapsed 00:12:12.909–00:12:12.913); repeats elided ...]
00:12:12.913 [2024-04-26 14:49:12.897715] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.897775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.897833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.897891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.897950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.898982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899797] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.899989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.913 [2024-04-26 14:49:12.900641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.900703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.900763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.900823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.900886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.900963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.901896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 
14:49:12.901959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.902956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.903992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 
[2024-04-26 14:49:12.904164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.904952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905076] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.905992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.906976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907201] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.914 [2024-04-26 14:49:12.907682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.907744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.907811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.907871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.907931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.907992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.908964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 
14:49:12.909296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.909948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.910970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 
[2024-04-26 14:49:12.911376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.911977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912315] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.915 [2024-04-26 14:49:12.912382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.916 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:12.918 [2024-04-26 14:49:12.933145] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.933990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.918 [2024-04-26 14:49:12.934804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.934864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.934932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.934997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935269] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.935956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.936978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 
14:49:12.937384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.937995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.938945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 
[2024-04-26 14:49:12.939562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.939992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940634] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.940984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.919 [2024-04-26 14:49:12.941858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.941920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.941991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942630] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.942965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.943975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 
14:49:12.944713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.944972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.945957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 
[2024-04-26 14:49:12.946826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.946948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.920 [2024-04-26 14:49:12.947890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.921 [2024-04-26 14:49:12.947957] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.923 
[2024-04-26 14:49:12.968821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.923 [2024-04-26 14:49:12.968881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.968938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.968998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969705] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.969968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.970964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971849] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.971967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.972775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.973911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 
14:49:12.973973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.924 [2024-04-26 14:49:12.974544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:12.925 [2024-04-26 14:49:12.974607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.974671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.974743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.974825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.974888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.974949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.975964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 
[2024-04-26 14:49:12.976160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.976970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977098] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.207 [2024-04-26 14:49:12.977769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.977852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.977915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.977976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.978994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979288] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.979905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.980955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 
14:49:12.981467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.981969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 [2024-04-26 14:49:12.982651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.208 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:13.209 
[2024-04-26 14:49:13.004227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 
[2024-04-26 14:49:13.004296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.004997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005214] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.211 [2024-04-26 14:49:13.005736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.005800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.005864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.005927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.005988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.006980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007463] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.007950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.008971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 
14:49:13.009606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.009971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.010934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.212 [2024-04-26 14:49:13.011551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.011616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 
[2024-04-26 14:49:13.011679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.011742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.011808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.011874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.011935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012784] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.012969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.013940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014862] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.014943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.015976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 
14:49:13.016790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.016909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.213 [2024-04-26 14:49:13.017940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.039884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 
[2024-04-26 14:49:13.040726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.040982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041648] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.041967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.042991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043736] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.043990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.044955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.217 [2024-04-26 14:49:13.045679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.045739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 
14:49:13.045800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.045866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.045926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.045987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.046998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 
[2024-04-26 14:49:13.047876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.047992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048797] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.048996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.218 [2024-04-26 14:49:13.049974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050885] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.050950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.051999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.218 [2024-04-26 14:49:13.052669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [2024-04-26 14:49:13.052728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [2024-04-26 14:49:13.052790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [2024-04-26 14:49:13.052857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [2024-04-26 14:49:13.052916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [2024-04-26 
14:49:13.052976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.219 [... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages from 14:49:13.053035 through 14:49:13.062905 omitted ...] 00:12:13.220 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:13.220 [... identical "nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages from 14:49:13.063164 through 14:49:13.075184 omitted ...] 00:12:13.222 [2024-04-26 14:49:13.075253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.075980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 
14:49:13.076247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.076960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.077948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 
[2024-04-26 14:49:13.078358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.078942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079295] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.222 [2024-04-26 14:49:13.079600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.079994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.080940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081466] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.081952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.082982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 
14:49:13.083607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.083993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.084938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 
[2024-04-26 14:49:13.085779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.085971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.223 [2024-04-26 14:49:13.086855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.086919] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.086981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.087980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.088820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 [2024-04-26 14:49:13.089037] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.224 true 00:12:13.227 14:49:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:13.227 14:49:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.227 [2024-04-26 14:49:13.112065] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.112972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.227 [2024-04-26 14:49:13.113861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.113928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.113997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114268] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.114946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.115956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 
14:49:13.116505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.116963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.117945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 
[2024-04-26 14:49:13.118694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.118953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119661] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.119987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.120067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.228 [2024-04-26 14:49:13.120157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.120989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121852] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.121981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.122957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.123969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 
14:49:13.124027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.124968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.125983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 
[2024-04-26 14:49:13.126209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.229 [2024-04-26 14:49:13.126748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.126810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.126871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.126934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127160] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.127987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.128049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.128140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.128206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.230 [2024-04-26 14:49:13.128273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.230 [... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 repeated for timestamps 14:49:13.128340 through 14:49:13.137043; duplicates omitted ...] 00:12:13.231 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:13.231 [... same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR* line repeated for timestamps 14:49:13.137292 through 14:49:13.148463; duplicates omitted ...] 00:12:13.233 [2024-04-26 14:49:13.148523] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.148971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.149958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150655] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.150966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.151976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 14:49:13.152737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.233 [2024-04-26 
14:49:13.152803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.152865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.152924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.152986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.153955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 
[2024-04-26 14:49:13.154917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.154978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155853] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.155980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.156975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.157975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158040] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.158983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.234 [2024-04-26 14:49:13.159980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 
14:49:13.160208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.160804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.161997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 
[2024-04-26 14:49:13.162335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.162975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.163035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.163094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.163350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.163454] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.235 [2024-04-26 14:49:13.163535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 
[2024-04-26 14:49:13.184566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.184957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185516] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.185960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.238 [2024-04-26 14:49:13.186917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.186976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187689] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.187932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.188920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 
14:49:13.189830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.189952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.190990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 
[2024-04-26 14:49:13.191896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.191963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192873] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.192995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.239 [2024-04-26 14:49:13.193979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.239 [2024-04-26 14:49:13.194040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.194985] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.195864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.196992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 
14:49:13.197141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.197991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.198049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [2024-04-26 14:49:13.198149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.240 [... identical nvmf_bdev_ctrlr_read_cmd error repeated, timestamps 14:49:13.198218-14:49:13.209710, omitted ...] 00:12:13.242 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:13.242 [... identical nvmf_bdev_ctrlr_read_cmd error repeated, timestamps 14:49:13.209917-14:49:13.219905, omitted ...] 00:12:13.243 
[2024-04-26 14:49:13.219964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.243 [2024-04-26 14:49:13.220700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.220762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.220826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.220886] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.220947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.221960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.222952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223010] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.223969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.224939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 
14:49:13.225070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.225789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.244 [2024-04-26 14:49:13.226657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.226722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.226781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.226843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.226902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.226961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 
[2024-04-26 14:49:13.227186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.227983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228109] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.228957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.229948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230218] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.230962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.231946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 
14:49:13.232296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.232979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.245 [2024-04-26 14:49:13.233385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.246 [2024-04-26 14:49:13.233464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.246 [... same message repeated continuously through 2024-04-26 14:49:13.254984; duplicate log lines omitted ...] [2024-04-26 14:49:13.254984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.255998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 
[2024-04-26 14:49:13.256210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.256964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257085] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.257901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.258949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259219] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.249 [2024-04-26 14:49:13.259762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.259832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.259895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.259961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.260968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 
14:49:13.261382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.261958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.262942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 
[2024-04-26 14:49:13.263575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.250 [2024-04-26 14:49:13.263840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.263905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.263970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264539] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.264989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.265961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266691] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.266963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.267932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 14:49:13.268810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 [2024-04-26 
14:49:13.268873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.518 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:13.520
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.291984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 
[2024-04-26 14:49:13.292648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.292959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293601] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.293949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.294987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295744] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.295992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.296059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.296311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.296397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.296486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.522 [2024-04-26 14:49:13.296550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.296991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 
14:49:13.297879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.297955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.298957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.299960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 
[2024-04-26 14:49:13.300023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.300987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301156] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.301958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.302949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.303010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.303075] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.303324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.523 [2024-04-26 14:49:13.303428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.303994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.304960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.305024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.305087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 14:49:13.305176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.524 [2024-04-26 
14:49:13.305242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26
14:49:13.327235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.327994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.328959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 
[2024-04-26 14:49:13.329297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.329999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330214] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.330801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.331011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.527 [2024-04-26 14:49:13.331096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.331990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332345] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.332939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.333940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 
14:49:13.334560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.334993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.335969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 
[2024-04-26 14:49:13.336707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.336950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337625] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 [2024-04-26 14:49:13.337872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 14:49:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.529 [2024-04-26 14:49:13.571437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.571932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 
14:49:13.572668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.572980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.529 [2024-04-26 14:49:13.573629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:13.529 [2024-04-26 14:49:13.573709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:13.794 14:49:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011
00:12:13.794 14:49:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
> SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.596973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597198] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.597942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.598938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 
14:49:13.599374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.599857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.795 [2024-04-26 14:49:13.600619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.600990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 
[2024-04-26 14:49:13.601553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.601944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602691] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.602937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.603970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604705] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.604974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.605946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 
14:49:13.606849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.606989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.796 [2024-04-26 14:49:13.607808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.607869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.607933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.607994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.608920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 
[2024-04-26 14:49:13.608989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.609988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.610051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.610144] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.797 [2024-04-26 14:49:13.610212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:13.797 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:12:13.801 [2024-04-26 14:49:13.632228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.632820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633404] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.633994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.634973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 
14:49:13.635612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.635963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.636029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.636092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.801 [2024-04-26 14:49:13.636182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.636995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 
[2024-04-26 14:49:13.637811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.637965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638814] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.638942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.639987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.640997] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.641937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.642972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 
14:49:13.643203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.802 [2024-04-26 14:49:13.643624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.643693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.643755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.643816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.643877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.643939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.644942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 
[2024-04-26 14:49:13.645405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.645961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646389] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.646997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.647059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.647156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.647223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 [2024-04-26 14:49:13.647506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:13.803 true 00:12:13.803 14:49:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:13.803 14:49:13 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.734 14:49:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.991 14:49:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:14.991 14:49:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:15.249 true 00:12:15.249 14:49:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:15.249 14:49:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.506 14:49:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.763 14:49:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:15.763 14:49:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:16.020 true 00:12:16.020 14:49:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:16.020 14:49:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.277 14:49:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.535 14:49:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:16.535 14:49:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:16.792 true 00:12:16.792 14:49:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 
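The trace above cycles through the same three RPCs each iteration: re-attach the namespace with nvmf_subsystem_add_ns, grow the null bdev with bdev_null_resize, then hot-remove the namespace with nvmf_subsystem_remove_ns while incrementing the null_size counter (1012, 1013, 1014, ...). A minimal sketch of that loop, with rpc.py stubbed out as a plain echo purely for illustration (the real invocations go through scripts/rpc.py in the SPDK tree, as shown in the log):

```shell
# Stub standing in for $SPDK_DIR/scripts/rpc.py -- prints the call
# instead of issuing a JSON-RPC request to the target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1012   # mirrors the null_size counter seen in the trace

for i in 1 2 3; do
  rpc nvmf_subsystem_add_ns "$NQN" Delay0       # re-attach namespace
  rpc bdev_null_resize NULL1 "$null_size"       # resize the NULL1 bdev
  rpc nvmf_subsystem_remove_ns "$NQN" 1         # hot-remove namespace 1
  null_size=$((null_size + 1))
done
```

The in-flight reads racing against each remove are what produce the suppressed read-error bursts between iterations in the log above.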
00:12:16.792 14:49:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.724 14:49:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.981 14:49:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:12:17.981 14:49:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:18.239 true 00:12:18.239 14:49:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:18.239 14:49:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 14:49:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.171 14:49:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:19.171 14:49:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:19.428 true 00:12:19.428 14:49:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:19.428 14:49:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 14:49:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.619 14:49:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:20.619 14:49:20 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:20.877 true 00:12:20.877 14:49:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:20.877 14:49:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.442 14:49:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.957 14:49:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:21.957 14:49:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:21.957 true 00:12:22.214 14:49:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:22.214 14:49:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.146 14:49:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.403 14:49:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:23.403 14:49:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:23.403 true 00:12:23.403 14:49:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:23.403 14:49:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 14:49:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.593 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:12:24.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.593 14:49:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:24.593 14:49:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:24.850 true 00:12:24.850 14:49:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:24.850 14:49:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 14:49:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.787 14:49:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:25.787 14:49:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:26.046 true 00:12:26.046 14:49:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:26.046 14:49:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.986 14:49:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.244 14:49:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:27.244 14:49:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:27.503 true 00:12:27.503 14:49:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:27.503 14:49:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.762 14:49:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.022 14:49:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:28.022 14:49:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:28.022 true 00:12:28.282 14:49:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:28.282 14:49:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.847 14:49:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.112 [2024-04-26 14:49:29.119867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 
[2024-04-26 14:49:29.133080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.133941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134221] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.134943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.135988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.136231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.114 [2024-04-26 14:49:29.136302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136367] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.136958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.137957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 
14:49:29.138482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.138973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 14:49:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:29.115 [2024-04-26 14:49:29.139746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.139998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 14:49:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:29.115 [2024-04-26 14:49:29.140060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140361] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.140922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.141987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.115 [2024-04-26 14:49:29.142436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142584] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.142971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.143969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 
14:49:29.144728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.144987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.145958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 
[2024-04-26 14:49:29.146930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.146997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.147835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.116 [2024-04-26 14:49:29.148060] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 
[2024-04-26 14:49:29.168798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.168878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.168945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169768] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.169977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.120 [2024-04-26 14:49:29.170589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.170648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.170709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.121 [2024-04-26 14:49:29.170775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.170988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171859] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.171981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.172986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.173899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 
14:49:29.173960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.174952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.175977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 
[2024-04-26 14:49:29.176036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.121 [2024-04-26 14:49:29.176774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.176834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.176903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.176967] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.177976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.178979] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.179918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.180965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 
14:49:29.181085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.122 [2024-04-26 14:49:29.181724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.181791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.181851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.181915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.181977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:29.123 [2024-04-26 14:49:29.182502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.182984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.183046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.183105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.123 [2024-04-26 14:49:29.183196] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409
[2024-04-26 14:49:29.203978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204900] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.204960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.205990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.206945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207178] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.207782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.409 [2024-04-26 14:49:29.208542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.208996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 
14:49:29.209333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.209943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.210968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 
[2024-04-26 14:49:29.211421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.211992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212331] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.212959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.213979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214456] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.214991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.410 [2024-04-26 14:49:29.215465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.215992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 
14:49:29.216530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.216969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.217626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.411 [2024-04-26 14:49:29.239131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.239997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 
[2024-04-26 14:49:29.240296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.240992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241206] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.414 [2024-04-26 14:49:29.241974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.242940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243276] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.243970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.244957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 
14:49:29.245313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.245950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.246938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 
[2024-04-26 14:49:29.247428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.247982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248322] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.415 [2024-04-26 14:49:29.248462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.248827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.249995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250405] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.250980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.251942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 14:49:29.252421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.416 [2024-04-26 
14:49:29.252490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:29.416 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated from 14:49:29.252551 through 14:49:29.273961]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.274958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 
[2024-04-26 14:49:29.275089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.419 [2024-04-26 14:49:29.275875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.275937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.276951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.277961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278082] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.278989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.279922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 
14:49:29.280225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.280980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.281974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.420 [2024-04-26 14:49:29.282042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 
[2024-04-26 14:49:29.282323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.282942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283466] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.283962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.284968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285382] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.285957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.286971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 14:49:29.287548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [2024-04-26 
14:49:29.287607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.421 [identical "*ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines from ctrlr_bdev.c:309 repeated from 14:49:29.287670 through 14:49:29.309426 elided] 00:12:29.424 [2024-04-26
14:49:29.309489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.309976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.310037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.310097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.310189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.424 [2024-04-26 14:49:29.310258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.310855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 
[2024-04-26 14:49:29.311569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.311935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312499] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.312994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.313953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314570] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.314948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.315984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.425 [2024-04-26 14:49:29.316398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 
14:49:29.316681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.316990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.317943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 
[2024-04-26 14:49:29.318757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.318944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319692] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.319994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.320972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321753] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.321942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.426 [2024-04-26 14:49:29.322934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical *ERROR* line repeated, timestamps 14:49:29.322994 through 14:49:29.324608, omitted ...] 00:12:29.427 Message suppressed 999 times: Read completed with error (sct=0, sc=15) [... identical *ERROR* line repeated, timestamps 14:49:29.324815 through 14:49:29.344319, omitted ...] 00:12:29.430 [2024-04-26 
14:49:29.344381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.344950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.345978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 
[2024-04-26 14:49:29.346456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.346965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347364] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.347987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.348951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349499] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.349972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.350035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.350095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.350183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.350247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.430 [2024-04-26 14:49:29.350311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.350958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 
14:49:29.351560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.351936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.352977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 
[2024-04-26 14:49:29.353670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.353977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354747] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.354999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.355998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356827] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.356953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.357014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.357079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.357166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.357231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.431 [2024-04-26 14:49:29.357298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.432 [2024-04-26 14:49:29.357825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the above *ERROR* line repeats continuously, with advancing timestamps, from 14:49:29.357890 through 14:49:29.379630 ...]
00:12:29.434 true
00:12:29.435 [2024-04-26 14:49:29.379692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.379757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.379992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 
[2024-04-26 14:49:29.380914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.380975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381870] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.381994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.382994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.383970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384035] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.435 [2024-04-26 14:49:29.384978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.385982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 
14:49:29.386195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.386884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 14:49:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745 00:12:29.436 14:49:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.436 [2024-04-26 14:49:29.387137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.387988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388287] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.388947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.389969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 
14:49:29.390473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.390989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.391985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 [2024-04-26 14:49:29.392552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.436 
[2024-04-26 14:49:29.392615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.392988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 [2024-04-26 14:49:29.393571] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 Message suppressed 999 times: [2024-04-26 14:49:29.396686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.437 Read completed with error (sct=0, sc=15) 00:12:29.440 [2024-04-26 14:49:29.416014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.416953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417018] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.417936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.418955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 
14:49:29.419163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.419940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.420974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 
[2024-04-26 14:49:29.421249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.421980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422186] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.440 [2024-04-26 14:49:29.422887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.422949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.423985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424266] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.424952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.425991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 
14:49:29.426323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.426938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.427936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 
[2024-04-26 14:49:29.428399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.428965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429492] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.441 [2024-04-26 14:49:29.429556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim from 14:49:29.429615 through 14:49:29.450780 ...] 
00:12:29.444 [2024-04-26 14:49:29.450844] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.450903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.450963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.451755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452918] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.452983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.453989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.444 [2024-04-26 14:49:29.454680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.454741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.454802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.454862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.454921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.454985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 
14:49:29.455049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.455959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.456996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 
[2024-04-26 14:49:29.457151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.457996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458056] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.458992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.459942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460356] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.460989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.461967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 
14:49:29.462529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.445 [2024-04-26 14:49:29.462804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.462889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.462952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.463954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 
[2024-04-26 14:49:29.464699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.717 [2024-04-26 14:49:29.464823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.464892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.464955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465641] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 [2024-04-26 14:49:29.465704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.718 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:29.721 [2024-04-26 14:49:29.486533] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.486592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.486654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.486718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.486778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.486992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.487963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488634] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.488942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.489972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 
14:49:29.490701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.490943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.491973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.721 [2024-04-26 14:49:29.492630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.492688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 
[2024-04-26 14:49:29.492746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.492807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.492871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.492932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.492997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493820] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.493946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.494995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495753] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.495957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.496942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 
14:49:29.497797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.497978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.722 [2024-04-26 14:49:29.498037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.498993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 
[2024-04-26 14:49:29.499840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.499963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.723 [2024-04-26 14:49:29.500927] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-04-26 14:49:29.521483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.521991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522384] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.522966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.523995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524491] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.726 [2024-04-26 14:49:29.524558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.524985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.525956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 
14:49:29.526549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.526982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.527963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 
[2024-04-26 14:49:29.528703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.528981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.529998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.530942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.531002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.727 [2024-04-26 14:49:29.531062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531814] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.531947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.532997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 
14:49:29.533920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.533979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.534989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 [2024-04-26 14:49:29.535057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.728 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:12:29.729 
[2024-04-26 14:49:29.556425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.556989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557328] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.731 [2024-04-26 14:49:29.557630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.557710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.557772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.557834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.557893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.557953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.558999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559429] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.559957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.560957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 
14:49:29.561503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.561939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.562979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 
[2024-04-26 14:49:29.563584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.563945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.564007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.564071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.564161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.732 [2024-04-26 14:49:29.564228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564661] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.564965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.565938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566691] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.566962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.567997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 
14:49:29.568619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.568959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.569790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.733 [2024-04-26 14:49:29.591361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.591983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 
[2024-04-26 14:49:29.592531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.592965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593452] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.593947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.594988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595525] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.595982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.596992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 14:49:29.597533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.737 [2024-04-26 
14:49:29.597594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.597966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.598961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 
[2024-04-26 14:49:29.599674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.599986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600593] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.600982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.601963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602641] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.602943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.603978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.604037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.604100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.604186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.738 [2024-04-26 14:49:29.604253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 14:49:29.604649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:29.739 [2024-04-26 
14:49:29.604715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:12:29.740 14:49:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:29.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:30.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:30.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:30.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:12:30.004 [2024-04-26 14:49:29.853772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:12:30.006 [2024-04-26 14:49:29.867582] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.867815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.867884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.867968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.006 [2024-04-26 14:49:29.868888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.868956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869705] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.869964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.870981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 
14:49:29.871879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.871940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.872957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 14:49:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:30.007 [2024-04-26 14:49:29.873188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 14:49:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:30.007 [2024-04-26 14:49:29.873325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.873931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874002] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.874967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.875935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.876001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.876067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.007 [2024-04-26 14:49:29.876159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876308] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.876965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.877975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 
14:49:29.878498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.878936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.879855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 
[2024-04-26 14:49:29.880674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.880943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881665] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.008 [2024-04-26 14:49:29.881725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated, timestamps 14:49:29.881788 through 14:49:29.904478 omitted]
00:12:30.012 [2024-04-26 14:49:29.904542] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.904942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.905934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906653] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.906988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.907989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 
14:49:29.908748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.908943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.909940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 
[2024-04-26 14:49:29.910837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.910985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.012 [2024-04-26 14:49:29.911564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.911941] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:12:30.013 [2024-04-26 14:49:29.912958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.913976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914123] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.914998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.915828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 
14:49:29.916249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.916977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.013 [2024-04-26 14:49:29.917600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.917663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.917734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.917808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.917900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.917994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 
[2024-04-26 14:49:29.918277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.918978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:12:30.014 [2024-04-26 14:49:29.919381] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:30.014 [2024-04-26 14:49:29.919461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:12:30.014 [last message repeated 17 more times, 14:49:29.919528 through 14:49:29.920757]
00:12:30.272 true
00:12:30.272 14:49:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745
00:12:30.272 14:49:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.213 14:49:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:31.213 14:49:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026
00:12:31.213 14:49:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:12:31.472 true
00:12:31.472 14:49:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745
00:12:31.472 14:49:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.731 14:49:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:31.989 14:49:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027
00:12:31.989 14:49:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:12:32.247 true
00:12:32.247 14:49:32 -- target/ns_hotplug_stress.sh@35 -- # kill
-0 170745
00:12:32.247 14:49:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:32.505 14:49:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:32.763 14:49:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028
00:12:32.763 14:49:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:12:33.021 true
00:12:33.021 14:49:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745
00:12:33.021 14:49:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:33.021 Initializing NVMe Controllers
00:12:33.021 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:12:33.021 Controller IO queue size 128, less than required.
00:12:33.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:33.021 Controller IO queue size 128, less than required.
00:12:33.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:33.021 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:33.021 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:33.021 Initialization complete. Launching workers.
00:12:33.021 ========================================================
00:12:33.021                                                              Latency(us)
00:12:33.021 Device Information                                         :     IOPS   MiB/s   Average       min        max
00:12:33.021 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  5873.37    2.87  17895.93   1907.58  1181479.00
00:12:33.021 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 19646.83    9.59   6514.93   3011.97   384631.82
00:12:33.021 ========================================================
00:12:33.021 Total                                                      : 25520.20   12.46   9134.22   1907.58  1181479.00
00:12:33.021
00:12:33.280 14:49:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:33.539 14:49:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029
00:12:33.539 14:49:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:12:33.539 true
00:12:33.800 14:49:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 170745
00:12:33.800 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (170745) - No such process
00:12:33.800 14:49:33 -- target/ns_hotplug_stress.sh@44 -- # wait 170745
00:12:33.800 14:49:33 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:12:33.800 14:49:33 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:12:33.800 14:49:33 -- nvmf/common.sh@477 -- # nvmfcleanup
00:12:33.800 14:49:33 -- nvmf/common.sh@117 -- # sync
00:12:33.800 14:49:33 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:12:33.800 14:49:33 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:12:33.800 14:49:33 -- nvmf/common.sh@120 -- # set +e
00:12:33.800 14:49:33 -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:33.800 14:49:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:12:33.800 rmmod nvme_rdma
rmmod nvme_fabrics
00:12:33.800 14:49:33 -- nvmf/common.sh@123 --
# modprobe -v -r nvme-fabrics 00:12:33.800 14:49:33 -- nvmf/common.sh@124 -- # set -e 00:12:33.800 14:49:33 -- nvmf/common.sh@125 -- # return 0 00:12:33.800 14:49:33 -- nvmf/common.sh@478 -- # '[' -n 170311 ']' 00:12:33.800 14:49:33 -- nvmf/common.sh@479 -- # killprocess 170311 00:12:33.800 14:49:33 -- common/autotest_common.sh@936 -- # '[' -z 170311 ']' 00:12:33.800 14:49:33 -- common/autotest_common.sh@940 -- # kill -0 170311 00:12:33.800 14:49:33 -- common/autotest_common.sh@941 -- # uname 00:12:33.800 14:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.800 14:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 170311 00:12:33.800 14:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:33.800 14:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:33.800 14:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 170311' 00:12:33.800 killing process with pid 170311 00:12:33.800 14:49:33 -- common/autotest_common.sh@955 -- # kill 170311 00:12:33.800 14:49:33 -- common/autotest_common.sh@960 -- # wait 170311 00:12:34.059 [2024-04-26 14:49:34.128086] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:35.440 14:49:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:35.440 14:49:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:35.440 00:12:35.440 real 0m41.066s 00:12:35.440 user 2m46.799s 00:12:35.440 sys 0m5.975s 00:12:35.440 14:49:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.440 14:49:35 -- common/autotest_common.sh@10 -- # set +x 00:12:35.440 ************************************ 00:12:35.440 END TEST nvmf_ns_hotplug_stress 00:12:35.440 ************************************ 00:12:35.440 14:49:35 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:35.440 14:49:35 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.440 14:49:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.440 14:49:35 -- common/autotest_common.sh@10 -- # set +x 00:12:35.698 ************************************ 00:12:35.698 START TEST nvmf_connect_stress 00:12:35.698 ************************************ 00:12:35.698 14:49:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:12:35.698 * Looking for test storage... 00:12:35.698 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:35.698 14:49:35 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.698 14:49:35 -- nvmf/common.sh@7 -- # uname -s 00:12:35.698 14:49:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.698 14:49:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.698 14:49:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.698 14:49:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.698 14:49:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.698 14:49:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.698 14:49:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.698 14:49:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.698 14:49:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.698 14:49:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.698 14:49:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:35.698 14:49:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:35.698 14:49:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.698 14:49:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.698 14:49:35 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:12:35.698 14:49:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.698 14:49:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:35.698 14:49:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.698 14:49:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.698 14:49:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.698 14:49:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.698 14:49:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.698 14:49:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.698 14:49:35 -- paths/export.sh@5 -- # export PATH 00:12:35.698 14:49:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.698 14:49:35 -- nvmf/common.sh@47 -- # : 0 00:12:35.698 14:49:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.698 14:49:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.698 14:49:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.699 14:49:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.699 14:49:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.699 14:49:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.699 14:49:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.699 14:49:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.699 14:49:35 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:35.699 14:49:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:35.699 14:49:35 -- nvmf/common.sh@435 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:12:35.699 14:49:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:35.699 14:49:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:35.699 14:49:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:35.699 14:49:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.699 14:49:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.699 14:49:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.699 14:49:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:35.699 14:49:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:35.699 14:49:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.699 14:49:35 -- common/autotest_common.sh@10 -- # set +x 00:12:37.605 14:49:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.605 14:49:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.605 14:49:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.605 14:49:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.605 14:49:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.605 14:49:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.605 14:49:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.605 14:49:37 -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.605 14:49:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.605 14:49:37 -- nvmf/common.sh@296 -- # e810=() 00:12:37.605 14:49:37 -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.605 14:49:37 -- nvmf/common.sh@297 -- # x722=() 00:12:37.605 14:49:37 -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.605 14:49:37 -- nvmf/common.sh@298 -- # mlx=() 00:12:37.605 14:49:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.605 14:49:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.605 14:49:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:37.605 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:37.605 14:49:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.605 14:49:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:37.605 Found 
0000:09:00.1 (0x15b3 - 0x1017) 00:12:37.605 14:49:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.605 14:49:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.605 14:49:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.605 14:49:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:37.605 Found net devices under 0000:09:00.0: mlx_0_0 00:12:37.605 14:49:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.605 14:49:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.605 14:49:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:37.605 Found net devices under 0000:09:00.1: mlx_0_1 00:12:37.605 14:49:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.605 14:49:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:37.605 14:49:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@409 -- # rdma_device_init 
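The device-discovery trace above resolves each NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path prefix, which is how the "Found net devices under 0000:09:00.x: mlx_0_x" lines are produced. A minimal sketch of that pattern, using a throwaway directory in place of the real sysfs tree (the directory layout and interface names here are stand-ins taken from the log, not a live system):

```shell
#!/usr/bin/env bash
# Simulate the sysfs layout nvmf/common.sh globs: /sys/bus/pci/devices/<pci>/net/<ifname>
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:09:00.0/net/mlx_0_0" "$tmp/0000:09:00.1/net/mlx_0_1"

for pci in 0000:09:00.0 0000:09:00.1; do
  pci_net_devs=("$tmp/$pci/net/"*)          # glob the net/ subdirectory, as in the trace
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keeping only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
rm -rf "$tmp"
```

On a real host the same two lines run against `/sys` directly; the glob expands to zero or more entries, which is why the trace checks `(( 1 == 0 ))` before using the result.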
00:12:37.605 14:49:37 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:37.605 14:49:37 -- nvmf/common.sh@58 -- # uname 00:12:37.605 14:49:37 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:37.605 14:49:37 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:37.605 14:49:37 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:37.605 14:49:37 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:37.605 14:49:37 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:37.605 14:49:37 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:37.605 14:49:37 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:37.605 14:49:37 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:37.605 14:49:37 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:37.605 14:49:37 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:37.605 14:49:37 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:37.605 14:49:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.605 14:49:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.605 14:49:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.605 14:49:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.605 14:49:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.605 14:49:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.605 14:49:37 -- nvmf/common.sh@105 -- # continue 2 00:12:37.605 14:49:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.605 14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.605 
14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.605 14:49:37 -- nvmf/common.sh@105 -- # continue 2 00:12:37.605 14:49:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.605 14:49:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:37.605 14:49:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.605 14:49:37 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:37.605 14:49:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:37.605 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.605 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:12:37.605 altname enp9s0f0np0 00:12:37.605 inet 192.168.100.8/24 scope global mlx_0_0 00:12:37.605 valid_lft forever preferred_lft forever 00:12:37.605 14:49:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.605 14:49:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:37.605 14:49:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.605 14:49:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.605 14:49:37 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:37.605 14:49:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:37.605 14:49:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:37.605 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.605 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:12:37.606 altname enp9s0f1np1 00:12:37.606 inet 192.168.100.9/24 scope global mlx_0_1 00:12:37.606 valid_lft forever 
preferred_lft forever 00:12:37.606 14:49:37 -- nvmf/common.sh@411 -- # return 0 00:12:37.606 14:49:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:37.606 14:49:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:37.606 14:49:37 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:37.606 14:49:37 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:37.606 14:49:37 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:37.606 14:49:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.606 14:49:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.606 14:49:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.606 14:49:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.606 14:49:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.606 14:49:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.606 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.606 14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.606 14:49:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.606 14:49:37 -- nvmf/common.sh@105 -- # continue 2 00:12:37.606 14:49:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.606 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.606 14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.606 14:49:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.606 14:49:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.606 14:49:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.606 14:49:37 -- nvmf/common.sh@105 -- # continue 2 00:12:37.606 14:49:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.606 14:49:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:37.606 14:49:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.606 
14:49:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.606 14:49:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.606 14:49:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.606 14:49:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.606 14:49:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:37.606 14:49:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.606 14:49:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.606 14:49:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.606 14:49:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.606 14:49:37 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:37.606 192.168.100.9' 00:12:37.606 14:49:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:37.606 192.168.100.9' 00:12:37.606 14:49:37 -- nvmf/common.sh@446 -- # head -n 1 00:12:37.606 14:49:37 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:37.606 14:49:37 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:37.606 192.168.100.9' 00:12:37.606 14:49:37 -- nvmf/common.sh@447 -- # tail -n +2 00:12:37.606 14:49:37 -- nvmf/common.sh@447 -- # head -n 1 00:12:37.606 14:49:37 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:37.606 14:49:37 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:37.606 14:49:37 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:37.606 14:49:37 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:37.606 14:49:37 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:37.606 14:49:37 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:37.606 14:49:37 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:37.606 14:49:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:37.606 14:49:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:37.606 14:49:37 -- common/autotest_common.sh@10 -- # set +x 00:12:37.606 14:49:37 -- nvmf/common.sh@470 -- # 
nvmfpid=176591 00:12:37.606 14:49:37 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:37.606 14:49:37 -- nvmf/common.sh@471 -- # waitforlisten 176591 00:12:37.606 14:49:37 -- common/autotest_common.sh@817 -- # '[' -z 176591 ']' 00:12:37.606 14:49:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.606 14:49:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.606 14:49:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.606 14:49:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.606 14:49:37 -- common/autotest_common.sh@10 -- # set +x 00:12:37.866 [2024-04-26 14:49:37.746300] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:37.866 [2024-04-26 14:49:37.746450] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.866 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.866 [2024-04-26 14:49:37.877069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.128 [2024-04-26 14:49:38.131858] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.128 [2024-04-26 14:49:38.131935] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.128 [2024-04-26 14:49:38.131960] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.128 [2024-04-26 14:49:38.131985] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:38.128 [2024-04-26 14:49:38.132005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.128 [2024-04-26 14:49:38.132167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.128 [2024-04-26 14:49:38.132229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.128 [2024-04-26 14:49:38.132234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.697 14:49:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.697 14:49:38 -- common/autotest_common.sh@850 -- # return 0 00:12:38.697 14:49:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:38.697 14:49:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.697 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.697 14:49:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.697 14:49:38 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:38.697 14:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.697 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.697 [2024-04-26 14:49:38.691558] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7f9281007940) succeed. 00:12:38.697 [2024-04-26 14:49:38.702251] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7f9280fc3940) succeed. 
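The trace that follows builds the connect_stress target in four RPC calls: create the RDMA transport, create subsystem cnode1, attach a listener on 192.168.100.8:4420, and back it with a null bdev. A sketch of that sequence with a stub `rpc_cmd` that only records the calls (the real `rpc_cmd` in the log dispatches each line through `scripts/rpc.py` to a running `nvmf_tgt`; the stub is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Stub: record each RPC instead of sending it to a live SPDK target.
calls=()
rpc_cmd() { calls+=("$*"); }

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks

printf '%s\n' "${calls[@]}"
```

The arguments mirror the trace exactly (`-m 10` caps the subsystem at 10 namespaces, which is what the hotplug test stresses); only the stub function is invented.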
00:12:38.958 14:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.958 14:49:38 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.958 14:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.958 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.958 14:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.958 14:49:38 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:38.958 14:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.958 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.958 [2024-04-26 14:49:38.941620] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:38.958 14:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.958 14:49:38 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:38.958 14:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.958 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.958 NULL1 00:12:38.958 14:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.958 14:49:38 -- target/connect_stress.sh@21 -- # PERF_PID=176859 00:12:38.958 14:49:38 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:38.958 14:49:38 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:38.958 14:49:38 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.958 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.958 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:38.959 14:49:38 -- target/connect_stress.sh@28 -- # cat 00:12:38.959 14:49:38 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:38.959 14:49:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.959 14:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.959 14:49:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.959 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.531 14:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.531 14:49:39 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:39.531 14:49:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.531 14:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.531 14:49:39 -- common/autotest_common.sh@10 -- # set +x 00:12:39.790 14:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.790 14:49:39 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:39.790 14:49:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.790 14:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.790 14:49:39 -- common/autotest_common.sh@10 -- # set +x 00:12:40.362 14:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.362 14:49:40 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:40.362 14:49:40 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.362 14:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.362 14:49:40 -- common/autotest_common.sh@10 -- # set +x 00:12:40.623 14:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.623 14:49:40 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:40.623 14:49:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.623 14:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.623 14:49:40 -- common/autotest_common.sh@10 -- # set +x 00:12:41.194 14:49:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.194 14:49:41 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:41.194 14:49:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.194 14:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.194 14:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:41.455 14:49:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.455 14:49:41 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:41.455 14:49:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.455 14:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.455 14:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.026 14:49:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.026 14:49:41 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:42.026 14:49:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.026 14:49:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.026 14:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.287 14:49:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.287 14:49:42 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:42.287 14:49:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.287 14:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.287 14:49:42 -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 14:49:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.857 14:49:42 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:42.857 14:49:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.857 14:49:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.857 14:49:42 -- common/autotest_common.sh@10 -- # set +x 00:12:43.118 14:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.118 14:49:43 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:43.118 14:49:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.118 14:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.118 14:49:43 -- common/autotest_common.sh@10 -- # set +x 00:12:43.378 14:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.378 14:49:43 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:43.378 14:49:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.378 14:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.378 14:49:43 -- common/autotest_common.sh@10 -- # set +x 00:12:43.947 14:49:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.947 14:49:43 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:43.947 14:49:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.947 14:49:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.947 14:49:43 -- common/autotest_common.sh@10 -- # set +x 00:12:44.205 14:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.205 14:49:44 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:44.205 14:49:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.205 14:49:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.205 14:49:44 -- common/autotest_common.sh@10 -- # set +x 00:12:44.773 14:49:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.773 14:49:44 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:44.773 14:49:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.773 14:49:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.773 14:49:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.034 14:49:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.034 14:49:45 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:45.034 14:49:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.034 14:49:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.034 14:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:45.602 14:49:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.602 14:49:45 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:45.602 14:49:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.602 14:49:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.602 14:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:45.862 14:49:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.862 14:49:45 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:45.862 14:49:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.862 14:49:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.862 14:49:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.435 14:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.435 14:49:46 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:46.435 14:49:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.435 14:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.435 14:49:46 -- common/autotest_common.sh@10 -- # set +x 00:12:46.694 14:49:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.694 14:49:46 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:46.694 14:49:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.694 14:49:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.694 14:49:46 -- common/autotest_common.sh@10 -- # set +x 00:12:47.263 14:49:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.263 14:49:47 -- 
target/connect_stress.sh@34 -- # kill -0 176859 00:12:47.263 14:49:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.263 14:49:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.263 14:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:47.524 14:49:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.524 14:49:47 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:47.524 14:49:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.524 14:49:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.524 14:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:48.092 14:49:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.092 14:49:47 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:48.092 14:49:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.092 14:49:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.092 14:49:47 -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 14:49:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.350 14:49:48 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:48.350 14:49:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.351 14:49:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.351 14:49:48 -- common/autotest_common.sh@10 -- # set +x 00:12:48.611 14:49:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.611 14:49:48 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:48.611 14:49:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.611 14:49:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.611 14:49:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.185 14:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.185 14:49:49 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:49.185 14:49:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.185 14:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.185 14:49:49 -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.185 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.444 14:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.444 14:49:49 -- target/connect_stress.sh@34 -- # kill -0 176859 00:12:49.444 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (176859) - No such process 00:12:49.444 14:49:49 -- target/connect_stress.sh@38 -- # wait 176859 00:12:49.445 14:49:49 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:49.445 14:49:49 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:49.445 14:49:49 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:49.445 14:49:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:49.445 14:49:49 -- nvmf/common.sh@117 -- # sync 00:12:49.445 14:49:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:49.445 14:49:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:49.445 14:49:49 -- nvmf/common.sh@120 -- # set +e 00:12:49.445 14:49:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.445 14:49:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:49.445 rmmod nvme_rdma 00:12:49.445 rmmod nvme_fabrics 00:12:49.706 14:49:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.706 14:49:49 -- nvmf/common.sh@124 -- # set -e 00:12:49.706 14:49:49 -- nvmf/common.sh@125 -- # return 0 00:12:49.706 14:49:49 -- nvmf/common.sh@478 -- # '[' -n 176591 ']' 00:12:49.706 14:49:49 -- nvmf/common.sh@479 -- # killprocess 176591 00:12:49.706 14:49:49 -- common/autotest_common.sh@936 -- # '[' -z 176591 ']' 00:12:49.706 14:49:49 -- common/autotest_common.sh@940 -- # kill -0 176591 00:12:49.706 14:49:49 -- common/autotest_common.sh@941 -- # uname 00:12:49.706 14:49:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.706 14:49:49 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 176591 00:12:49.706 14:49:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:49.706 14:49:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:49.706 14:49:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 176591' 00:12:49.706 killing process with pid 176591 00:12:49.706 14:49:49 -- common/autotest_common.sh@955 -- # kill 176591 00:12:49.706 14:49:49 -- common/autotest_common.sh@960 -- # wait 176591 00:12:49.968 [2024-04-26 14:49:49.979901] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:51.352 14:49:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:51.352 14:49:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:51.352 00:12:51.352 real 0m15.684s 00:12:51.352 user 0m43.430s 00:12:51.352 sys 0m5.225s 00:12:51.352 14:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.352 14:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:51.352 ************************************ 00:12:51.352 END TEST nvmf_connect_stress 00:12:51.352 ************************************ 00:12:51.352 14:49:51 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:51.352 14:49:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.352 14:49:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.353 14:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:51.353 ************************************ 00:12:51.353 START TEST nvmf_fused_ordering 00:12:51.353 ************************************ 00:12:51.353 14:49:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:51.353 * Looking for test storage... 
00:12:51.353 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:51.353 14:49:51 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.353 14:49:51 -- nvmf/common.sh@7 -- # uname -s 00:12:51.353 14:49:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.353 14:49:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.353 14:49:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.353 14:49:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.353 14:49:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.353 14:49:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.353 14:49:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.353 14:49:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.353 14:49:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.353 14:49:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.353 14:49:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:51.353 14:49:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:51.353 14:49:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.353 14:49:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.353 14:49:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.353 14:49:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.353 14:49:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:51.353 14:49:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.353 14:49:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.353 14:49:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.353 14:49:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.353 14:49:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.353 14:49:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.353 14:49:51 -- paths/export.sh@5 -- # export PATH 00:12:51.353 14:49:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.353 14:49:51 -- nvmf/common.sh@47 -- # : 0 00:12:51.353 14:49:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.353 14:49:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.353 14:49:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.353 14:49:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.353 14:49:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.353 14:49:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.353 14:49:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.353 14:49:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.613 14:49:51 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:51.613 14:49:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:51.613 14:49:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.613 14:49:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:51.613 14:49:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:51.613 14:49:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:51.613 14:49:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.613 14:49:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.613 14:49:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.613 14:49:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:51.613 14:49:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:51.613 14:49:51 
-- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.613 14:49:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 14:49:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:53.523 14:49:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.523 14:49:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.523 14:49:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.523 14:49:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.523 14:49:53 -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.523 14:49:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@296 -- # e810=() 00:12:53.523 14:49:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.523 14:49:53 -- nvmf/common.sh@297 -- # x722=() 00:12:53.523 14:49:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.523 14:49:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:53.523 14:49:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.523 14:49:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.523 14:49:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:53.523 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:53.523 14:49:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.523 14:49:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:53.523 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:53.523 14:49:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.523 14:49:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:53.523 14:49:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.523 14:49:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:53.523 Found net devices under 0000:09:00.0: mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.523 14:49:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.523 14:49:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:53.523 Found net devices under 0000:09:00.1: mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.523 14:49:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:53.523 14:49:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:53.523 14:49:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:53.523 14:49:53 -- nvmf/common.sh@58 -- # uname 00:12:53.523 14:49:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:53.523 14:49:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:53.523 14:49:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:53.523 14:49:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:53.523 14:49:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:53.523 14:49:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:53.523 14:49:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:53.523 14:49:53 -- nvmf/common.sh@68 -- # modprobe 
rdma_ucm 00:12:53.523 14:49:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:53.523 14:49:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:53.523 14:49:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:53.523 14:49:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:53.523 14:49:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.523 14:49:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@105 -- # continue 2 00:12:53.523 14:49:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@105 -- # continue 2 00:12:53.523 14:49:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:53.523 14:49:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.523 14:49:53 -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:53.523 14:49:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:53.523 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.523 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:12:53.523 altname enp9s0f0np0 00:12:53.523 inet 192.168.100.8/24 scope global mlx_0_0 00:12:53.523 valid_lft forever preferred_lft forever 00:12:53.523 14:49:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:53.523 14:49:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.523 14:49:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:53.523 14:49:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:53.523 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.523 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:12:53.523 altname enp9s0f1np1 00:12:53.523 inet 192.168.100.9/24 scope global mlx_0_1 00:12:53.523 valid_lft forever preferred_lft forever 00:12:53.523 14:49:53 -- nvmf/common.sh@411 -- # return 0 00:12:53.523 14:49:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:53.523 14:49:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:53.523 14:49:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:53.523 14:49:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:53.523 14:49:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:53.523 14:49:53 -- nvmf/common.sh@94 -- # 
rxe_cfg rxe-net 00:12:53.523 14:49:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.523 14:49:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:53.523 14:49:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@105 -- # continue 2 00:12:53.523 14:49:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.523 14:49:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.523 14:49:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@105 -- # continue 2 00:12:53.523 14:49:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:53.523 14:49:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.523 14:49:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:53.523 14:49:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.523 14:49:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.523 14:49:53 -- 
nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:53.523 192.168.100.9' 00:12:53.523 14:49:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:53.523 192.168.100.9' 00:12:53.523 14:49:53 -- nvmf/common.sh@446 -- # head -n 1 00:12:53.523 14:49:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:53.523 14:49:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:53.523 192.168.100.9' 00:12:53.523 14:49:53 -- nvmf/common.sh@447 -- # tail -n +2 00:12:53.523 14:49:53 -- nvmf/common.sh@447 -- # head -n 1 00:12:53.523 14:49:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:53.523 14:49:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:53.523 14:49:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:53.523 14:49:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:53.523 14:49:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:53.523 14:49:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:53.523 14:49:53 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:53.523 14:49:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:53.523 14:49:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:53.523 14:49:53 -- common/autotest_common.sh@10 -- # set +x 00:12:53.523 14:49:53 -- nvmf/common.sh@470 -- # nvmfpid=179893 00:12:53.523 14:49:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:53.523 14:49:53 -- nvmf/common.sh@471 -- # waitforlisten 179893 00:12:53.523 14:49:53 -- common/autotest_common.sh@817 -- # '[' -z 179893 ']' 00:12:53.523 14:49:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.523 14:49:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.523 14:49:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.523 14:49:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.523 14:49:53 -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 [2024-04-26 14:49:53.621158] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:53.784 [2024-04-26 14:49:53.621289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.784 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.784 [2024-04-26 14:49:53.744711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.042 [2024-04-26 14:49:53.991874] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.042 [2024-04-26 14:49:53.991950] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.042 [2024-04-26 14:49:53.991974] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.042 [2024-04-26 14:49:53.991997] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.042 [2024-04-26 14:49:53.992016] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
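The trace above derives each interface's IP with `get_ip_address` (an `ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1` pipeline) and then splits the multi-line `RDMA_IP_LIST` into `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` with `head`/`tail`. A minimal standalone sketch of both parsing steps — the `sample` line is a hypothetical stand-in for live `ip -o -4 addr show mlx_0_0` output:

```shell
#!/bin/sh
# Hypothetical sample of `ip -o -4 addr show mlx_0_0` output, in place of a live RDMA NIC.
sample='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'

# get_ip_address pipeline from the trace: take the 4th field, strip the /prefix.
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)

# RDMA_IP_LIST as built in the trace: one discovered IP per line.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First target IP: first line of the list.
NVMF_FIRST_TARGET_IP=$(printf '%s\n' "$RDMA_IP_LIST" | head -n 1)
# Second target IP: drop the first line, keep the next one.
NVMF_SECOND_TARGET_IP=$(printf '%s\n' "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$ip_addr"
echo "$NVMF_FIRST_TARGET_IP"
echo "$NVMF_SECOND_TARGET_IP"
```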
00:12:54.042 [2024-04-26 14:49:53.992074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.613 14:49:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.613 14:49:54 -- common/autotest_common.sh@850 -- # return 0 00:12:54.613 14:49:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:54.613 14:49:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:54.613 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.613 14:49:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.613 14:49:54 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:54.613 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.613 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.613 [2024-04-26 14:49:54.595713] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027940/0x7f9c0c4e0940) succeed. 00:12:54.613 [2024-04-26 14:49:54.608123] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027ac0/0x7f9c0c49c940) succeed. 
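Once `nvmf_tgt` is up, the test configures it over the RPC socket. The `rpc_cmd` invocations traced below correspond to SPDK's `scripts/rpc.py`; this is a sketch of the same sequence run by hand — a configuration fragment, not a standalone script, since the path, NQN, and listen address mirror this particular log and are environment-specific:

```shell
rpc=./scripts/rpc.py   # relative to the SPDK repo root (environment-specific)

# RDMA transport with the shared-buffer count chosen earlier in the trace.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Subsystem: allow any host (-a), fixed serial (-s), max 10 namespaces (-m).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen on the first RDMA IP discovered above.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# 1000 MiB null bdev with 512-byte blocks, attached as the subsystem's namespace.
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this, the `fused_ordering` client connects to `nqn.2016-06.io.spdk:cnode1` on port 4420 and drives the counter output seen in the remainder of the log.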
00:12:54.874 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.874 14:49:54 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:54.874 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.874 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.874 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.874 14:49:54 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:54.874 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.874 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.874 [2024-04-26 14:49:54.721265] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:54.875 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.875 14:49:54 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:54.875 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.875 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.875 NULL1 00:12:54.875 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.875 14:49:54 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:54.875 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.875 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.875 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.875 14:49:54 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:54.875 14:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.875 14:49:54 -- common/autotest_common.sh@10 -- # set +x 00:12:54.875 14:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.875 14:49:54 -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:54.875 [2024-04-26 14:49:54.794552] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:54.875 [2024-04-26 14:49:54.794653] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180121 ] 00:12:54.875 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.135 Attached to nqn.2016-06.io.spdk:cnode1 00:12:55.135 Namespace ID: 1 size: 1GB 00:12:55.135 fused_ordering(0) 00:12:55.135 fused_ordering(1) 00:12:55.135 fused_ordering(2) 00:12:55.135 fused_ordering(3) 00:12:55.135 fused_ordering(4) 00:12:55.135 fused_ordering(5) 00:12:55.135 fused_ordering(6) 00:12:55.135 fused_ordering(7) 00:12:55.135 fused_ordering(8) 00:12:55.135 fused_ordering(9) 00:12:55.135 fused_ordering(10) 00:12:55.135 fused_ordering(11) 00:12:55.135 fused_ordering(12) 00:12:55.135 fused_ordering(13) 00:12:55.135 fused_ordering(14) 00:12:55.135 fused_ordering(15) 00:12:55.135 fused_ordering(16) 00:12:55.135 fused_ordering(17) 00:12:55.135 fused_ordering(18) 00:12:55.135 fused_ordering(19) 00:12:55.135 fused_ordering(20) 00:12:55.135 fused_ordering(21) 00:12:55.135 fused_ordering(22) 00:12:55.135 fused_ordering(23) 00:12:55.135 fused_ordering(24) 00:12:55.135 fused_ordering(25) 00:12:55.135 fused_ordering(26) 00:12:55.135 fused_ordering(27) 00:12:55.135 fused_ordering(28) 00:12:55.135 fused_ordering(29) 00:12:55.135 fused_ordering(30) 00:12:55.135 fused_ordering(31) 00:12:55.135 fused_ordering(32) 00:12:55.135 fused_ordering(33) 00:12:55.135 fused_ordering(34) 00:12:55.135 fused_ordering(35) 00:12:55.135 fused_ordering(36) 00:12:55.135 fused_ordering(37) 00:12:55.135 fused_ordering(38) 00:12:55.135 
fused_ordering(39) 00:12:55.135 fused_ordering(40) 00:12:55.135 fused_ordering(41) 00:12:55.135 fused_ordering(42) 00:12:55.135 fused_ordering(43) 00:12:55.135 fused_ordering(44) 00:12:55.135 fused_ordering(45) 00:12:55.135 fused_ordering(46) 00:12:55.135 fused_ordering(47) 00:12:55.135 fused_ordering(48) 00:12:55.135 fused_ordering(49) 00:12:55.135 fused_ordering(50) 00:12:55.135 fused_ordering(51) 00:12:55.135 fused_ordering(52) 00:12:55.135 fused_ordering(53) 00:12:55.135 fused_ordering(54) 00:12:55.135 fused_ordering(55) 00:12:55.135 fused_ordering(56) 00:12:55.135 fused_ordering(57) 00:12:55.135 fused_ordering(58) 00:12:55.135 fused_ordering(59) 00:12:55.135 fused_ordering(60) 00:12:55.135 fused_ordering(61) 00:12:55.135 fused_ordering(62) 00:12:55.135 fused_ordering(63) 00:12:55.135 fused_ordering(64) 00:12:55.135 fused_ordering(65) 00:12:55.135 fused_ordering(66) 00:12:55.135 fused_ordering(67) 00:12:55.135 fused_ordering(68) 00:12:55.135 fused_ordering(69) 00:12:55.135 fused_ordering(70) 00:12:55.135 fused_ordering(71) 00:12:55.135 fused_ordering(72) 00:12:55.135 fused_ordering(73) 00:12:55.135 fused_ordering(74) 00:12:55.135 fused_ordering(75) 00:12:55.135 fused_ordering(76) 00:12:55.135 fused_ordering(77) 00:12:55.135 fused_ordering(78) 00:12:55.135 fused_ordering(79) 00:12:55.135 fused_ordering(80) 00:12:55.135 fused_ordering(81) 00:12:55.135 fused_ordering(82) 00:12:55.135 fused_ordering(83) 00:12:55.135 fused_ordering(84) 00:12:55.135 fused_ordering(85) 00:12:55.136 fused_ordering(86) 00:12:55.136 fused_ordering(87) 00:12:55.136 fused_ordering(88) 00:12:55.136 fused_ordering(89) 00:12:55.136 fused_ordering(90) 00:12:55.136 fused_ordering(91) 00:12:55.136 fused_ordering(92) 00:12:55.136 fused_ordering(93) 00:12:55.136 fused_ordering(94) 00:12:55.136 fused_ordering(95) 00:12:55.136 fused_ordering(96) 00:12:55.136 fused_ordering(97) 00:12:55.136 fused_ordering(98) 00:12:55.136 fused_ordering(99) 00:12:55.136 fused_ordering(100) 00:12:55.136 
fused_ordering(101) 00:12:55.136 fused_ordering(102) 00:12:55.136 fused_ordering(103) 00:12:55.136 fused_ordering(104) 00:12:55.136 fused_ordering(105) 00:12:55.136 fused_ordering(106) 00:12:55.136 fused_ordering(107) 00:12:55.136 fused_ordering(108) 00:12:55.136 fused_ordering(109) 00:12:55.136 fused_ordering(110) 00:12:55.136 fused_ordering(111) 00:12:55.136 fused_ordering(112) 00:12:55.136 fused_ordering(113) 00:12:55.136 fused_ordering(114) 00:12:55.136 fused_ordering(115) 00:12:55.136 fused_ordering(116) 00:12:55.136 fused_ordering(117) 00:12:55.136 fused_ordering(118) 00:12:55.136 fused_ordering(119) 00:12:55.136 fused_ordering(120) 00:12:55.136 fused_ordering(121) 00:12:55.136 fused_ordering(122) 00:12:55.136 fused_ordering(123) 00:12:55.136 fused_ordering(124) 00:12:55.136 fused_ordering(125) 00:12:55.136 fused_ordering(126) 00:12:55.136 fused_ordering(127) 00:12:55.136 fused_ordering(128) 00:12:55.136 fused_ordering(129) 00:12:55.136 fused_ordering(130) 00:12:55.136 fused_ordering(131) 00:12:55.136 fused_ordering(132) 00:12:55.136 fused_ordering(133) 00:12:55.136 fused_ordering(134) 00:12:55.136 fused_ordering(135) 00:12:55.136 fused_ordering(136) 00:12:55.136 fused_ordering(137) 00:12:55.136 fused_ordering(138) 00:12:55.136 fused_ordering(139) 00:12:55.136 fused_ordering(140) 00:12:55.136 fused_ordering(141) 00:12:55.136 fused_ordering(142) 00:12:55.136 fused_ordering(143) 00:12:55.136 fused_ordering(144) 00:12:55.136 fused_ordering(145) 00:12:55.136 fused_ordering(146) 00:12:55.136 fused_ordering(147) 00:12:55.136 fused_ordering(148) 00:12:55.136 fused_ordering(149) 00:12:55.136 fused_ordering(150) 00:12:55.136 fused_ordering(151) 00:12:55.136 fused_ordering(152) 00:12:55.136 fused_ordering(153) 00:12:55.136 fused_ordering(154) 00:12:55.136 fused_ordering(155) 00:12:55.136 fused_ordering(156) 00:12:55.136 fused_ordering(157) 00:12:55.136 fused_ordering(158) 00:12:55.136 fused_ordering(159) 00:12:55.136 fused_ordering(160) 00:12:55.136 fused_ordering(161) 
00:12:55.136 fused_ordering(162) 00:12:55.136 fused_ordering(163) 00:12:55.136 fused_ordering(164) 00:12:55.136 fused_ordering(165) 00:12:55.136 fused_ordering(166) 00:12:55.136 fused_ordering(167) 00:12:55.136 fused_ordering(168) 00:12:55.136 fused_ordering(169) 00:12:55.136 fused_ordering(170) 00:12:55.136 fused_ordering(171) 00:12:55.136 fused_ordering(172) 00:12:55.136 fused_ordering(173) 00:12:55.136 fused_ordering(174) 00:12:55.136 fused_ordering(175) 00:12:55.136 fused_ordering(176) 00:12:55.136 fused_ordering(177) 00:12:55.136 fused_ordering(178) 00:12:55.136 fused_ordering(179) 00:12:55.136 fused_ordering(180) 00:12:55.136 fused_ordering(181) 00:12:55.136 fused_ordering(182) 00:12:55.136 fused_ordering(183) 00:12:55.136 fused_ordering(184) 00:12:55.136 fused_ordering(185) 00:12:55.136 fused_ordering(186) 00:12:55.136 fused_ordering(187) 00:12:55.136 fused_ordering(188) 00:12:55.136 fused_ordering(189) 00:12:55.136 fused_ordering(190) 00:12:55.136 fused_ordering(191) 00:12:55.136 fused_ordering(192) 00:12:55.136 fused_ordering(193) 00:12:55.136 fused_ordering(194) 00:12:55.136 fused_ordering(195) 00:12:55.136 fused_ordering(196) 00:12:55.136 fused_ordering(197) 00:12:55.136 fused_ordering(198) 00:12:55.136 fused_ordering(199) 00:12:55.136 fused_ordering(200) 00:12:55.136 fused_ordering(201) 00:12:55.136 fused_ordering(202) 00:12:55.136 fused_ordering(203) 00:12:55.136 fused_ordering(204) 00:12:55.136 fused_ordering(205) 00:12:55.399 fused_ordering(206) 00:12:55.399 fused_ordering(207) 00:12:55.399 fused_ordering(208) 00:12:55.399 fused_ordering(209) 00:12:55.399 fused_ordering(210) 00:12:55.399 fused_ordering(211) 00:12:55.399 fused_ordering(212) 00:12:55.399 fused_ordering(213) 00:12:55.399 fused_ordering(214) 00:12:55.399 fused_ordering(215) 00:12:55.399 fused_ordering(216) 00:12:55.399 fused_ordering(217) 00:12:55.399 fused_ordering(218) 00:12:55.399 fused_ordering(219) 00:12:55.399 fused_ordering(220) 00:12:55.399 fused_ordering(221) 00:12:55.399 
fused_ordering(222) 00:12:55.399 fused_ordering(223) 00:12:55.399 fused_ordering(224) 00:12:55.399 fused_ordering(225) 00:12:55.399 fused_ordering(226) 00:12:55.399 fused_ordering(227) 00:12:55.399 fused_ordering(228) 00:12:55.399 fused_ordering(229) 00:12:55.399 fused_ordering(230) 00:12:55.399 fused_ordering(231) 00:12:55.399 fused_ordering(232) 00:12:55.399 fused_ordering(233) 00:12:55.399 fused_ordering(234) 00:12:55.399 fused_ordering(235) 00:12:55.399 fused_ordering(236) 00:12:55.399 fused_ordering(237) 00:12:55.399 fused_ordering(238) 00:12:55.399 fused_ordering(239) 00:12:55.399 fused_ordering(240) 00:12:55.399 fused_ordering(241) 00:12:55.399 fused_ordering(242) 00:12:55.399 fused_ordering(243) 00:12:55.399 fused_ordering(244) 00:12:55.399 fused_ordering(245) 00:12:55.399 fused_ordering(246) 00:12:55.399 fused_ordering(247) 00:12:55.399 fused_ordering(248) 00:12:55.399 fused_ordering(249) 00:12:55.399 fused_ordering(250) 00:12:55.399 fused_ordering(251) 00:12:55.399 fused_ordering(252) 00:12:55.399 fused_ordering(253) 00:12:55.399 fused_ordering(254) 00:12:55.399 fused_ordering(255) 00:12:55.399 fused_ordering(256) 00:12:55.399 fused_ordering(257) 00:12:55.399 fused_ordering(258) 00:12:55.399 fused_ordering(259) 00:12:55.399 fused_ordering(260) 00:12:55.399 fused_ordering(261) 00:12:55.399 fused_ordering(262) 00:12:55.399 fused_ordering(263) 00:12:55.399 fused_ordering(264) 00:12:55.399 fused_ordering(265) 00:12:55.399 fused_ordering(266) 00:12:55.399 fused_ordering(267) 00:12:55.399 fused_ordering(268) 00:12:55.399 fused_ordering(269) 00:12:55.399 fused_ordering(270) 00:12:55.399 fused_ordering(271) 00:12:55.399 fused_ordering(272) 00:12:55.399 fused_ordering(273) 00:12:55.399 fused_ordering(274) 00:12:55.399 fused_ordering(275) 00:12:55.399 fused_ordering(276) 00:12:55.399 fused_ordering(277) 00:12:55.399 fused_ordering(278) 00:12:55.399 fused_ordering(279) 00:12:55.399 fused_ordering(280) 00:12:55.399 fused_ordering(281) 00:12:55.399 fused_ordering(282) 
00:12:55.399 fused_ordering(283) 00:12:55.399 fused_ordering(284) 00:12:55.399 fused_ordering(285) 00:12:55.399 fused_ordering(286) 00:12:55.399 fused_ordering(287) 00:12:55.399 fused_ordering(288) 00:12:55.399 fused_ordering(289) 00:12:55.399 fused_ordering(290) 00:12:55.399 fused_ordering(291) 00:12:55.399 fused_ordering(292) 00:12:55.399 fused_ordering(293) 00:12:55.399 fused_ordering(294) 00:12:55.399 fused_ordering(295) 00:12:55.399 fused_ordering(296) 00:12:55.399 fused_ordering(297) 00:12:55.399 fused_ordering(298) 00:12:55.399 fused_ordering(299) 00:12:55.399 fused_ordering(300) 00:12:55.399 fused_ordering(301) 00:12:55.399 fused_ordering(302) 00:12:55.399 fused_ordering(303) 00:12:55.399 fused_ordering(304) 00:12:55.399 fused_ordering(305) 00:12:55.399 fused_ordering(306) 00:12:55.399 fused_ordering(307) 00:12:55.399 fused_ordering(308) 00:12:55.399 fused_ordering(309) 00:12:55.399 fused_ordering(310) 00:12:55.399 fused_ordering(311) 00:12:55.399 fused_ordering(312) 00:12:55.399 fused_ordering(313) 00:12:55.399 fused_ordering(314) 00:12:55.399 fused_ordering(315) 00:12:55.399 fused_ordering(316) 00:12:55.399 fused_ordering(317) 00:12:55.399 fused_ordering(318) 00:12:55.399 fused_ordering(319) 00:12:55.399 fused_ordering(320) 00:12:55.399 fused_ordering(321) 00:12:55.399 fused_ordering(322) 00:12:55.399 fused_ordering(323) 00:12:55.399 fused_ordering(324) 00:12:55.399 fused_ordering(325) 00:12:55.399 fused_ordering(326) 00:12:55.399 fused_ordering(327) 00:12:55.399 fused_ordering(328) 00:12:55.399 fused_ordering(329) 00:12:55.399 fused_ordering(330) 00:12:55.399 fused_ordering(331) 00:12:55.399 fused_ordering(332) 00:12:55.399 fused_ordering(333) 00:12:55.399 fused_ordering(334) 00:12:55.399 fused_ordering(335) 00:12:55.399 fused_ordering(336) 00:12:55.399 fused_ordering(337) 00:12:55.399 fused_ordering(338) 00:12:55.399 fused_ordering(339) 00:12:55.399 fused_ordering(340) 00:12:55.399 fused_ordering(341) 00:12:55.399 fused_ordering(342) 00:12:55.399 
fused_ordering(343) 00:12:55.399 fused_ordering(344) 00:12:55.399 fused_ordering(345) 00:12:55.399 fused_ordering(346) 00:12:55.399 fused_ordering(347) 00:12:55.399 fused_ordering(348) 00:12:55.399 fused_ordering(349) 00:12:55.399 fused_ordering(350) 00:12:55.399 fused_ordering(351) 00:12:55.399 fused_ordering(352) 00:12:55.399 fused_ordering(353) 00:12:55.399 fused_ordering(354) 00:12:55.399 fused_ordering(355) 00:12:55.399 fused_ordering(356) 00:12:55.399 fused_ordering(357) 00:12:55.399 fused_ordering(358) 00:12:55.399 fused_ordering(359) 00:12:55.399 fused_ordering(360) 00:12:55.399 fused_ordering(361) 00:12:55.399 fused_ordering(362) 00:12:55.399 fused_ordering(363) 00:12:55.399 fused_ordering(364) 00:12:55.399 fused_ordering(365) 00:12:55.399 fused_ordering(366) 00:12:55.399 fused_ordering(367) 00:12:55.399 fused_ordering(368) 00:12:55.399 fused_ordering(369) 00:12:55.399 fused_ordering(370) 00:12:55.399 fused_ordering(371) 00:12:55.399 fused_ordering(372) 00:12:55.399 fused_ordering(373) 00:12:55.399 fused_ordering(374) 00:12:55.399 fused_ordering(375) 00:12:55.399 fused_ordering(376) 00:12:55.399 fused_ordering(377) 00:12:55.399 fused_ordering(378) 00:12:55.399 fused_ordering(379) 00:12:55.399 fused_ordering(380) 00:12:55.399 fused_ordering(381) 00:12:55.399 fused_ordering(382) 00:12:55.399 fused_ordering(383) 00:12:55.399 fused_ordering(384) 00:12:55.399 fused_ordering(385) 00:12:55.399 fused_ordering(386) 00:12:55.399 fused_ordering(387) 00:12:55.399 fused_ordering(388) 00:12:55.399 fused_ordering(389) 00:12:55.399 fused_ordering(390) 00:12:55.399 fused_ordering(391) 00:12:55.399 fused_ordering(392) 00:12:55.399 fused_ordering(393) 00:12:55.399 fused_ordering(394) 00:12:55.399 fused_ordering(395) 00:12:55.399 fused_ordering(396) 00:12:55.399 fused_ordering(397) 00:12:55.399 fused_ordering(398) 00:12:55.399 fused_ordering(399) 00:12:55.399 fused_ordering(400) 00:12:55.399 fused_ordering(401) 00:12:55.399 fused_ordering(402) 00:12:55.399 fused_ordering(403) 
00:12:55.399 fused_ordering(404) 00:12:55.399 fused_ordering(405) 00:12:55.399 fused_ordering(406) 00:12:55.399 fused_ordering(407) 00:12:55.399 fused_ordering(408) 00:12:55.399 fused_ordering(409) 00:12:55.399 fused_ordering(410) 00:12:55.399 fused_ordering(411) 00:12:55.399 fused_ordering(412) 00:12:55.399 fused_ordering(413) 00:12:55.399 fused_ordering(414) 00:12:55.399 fused_ordering(415) 00:12:55.399 fused_ordering(416) 00:12:55.399 fused_ordering(417) 00:12:55.399 fused_ordering(418) 00:12:55.399 fused_ordering(419) 00:12:55.399 fused_ordering(420) 00:12:55.399 fused_ordering(421) 00:12:55.399 fused_ordering(422) 00:12:55.399 fused_ordering(423) 00:12:55.399 fused_ordering(424) 00:12:55.399 fused_ordering(425) 00:12:55.399 fused_ordering(426) 00:12:55.399 fused_ordering(427) 00:12:55.399 fused_ordering(428) 00:12:55.399 fused_ordering(429) 00:12:55.399 fused_ordering(430) 00:12:55.399 fused_ordering(431) 00:12:55.399 fused_ordering(432) 00:12:55.399 fused_ordering(433) 00:12:55.399 fused_ordering(434) 00:12:55.399 fused_ordering(435) 00:12:55.399 fused_ordering(436) 00:12:55.399 fused_ordering(437) 00:12:55.399 fused_ordering(438) 00:12:55.399 fused_ordering(439) 00:12:55.399 fused_ordering(440) 00:12:55.399 fused_ordering(441) 00:12:55.399 fused_ordering(442) 00:12:55.399 fused_ordering(443) 00:12:55.399 fused_ordering(444) 00:12:55.399 fused_ordering(445) 00:12:55.399 fused_ordering(446) 00:12:55.399 fused_ordering(447) 00:12:55.399 fused_ordering(448) 00:12:55.399 fused_ordering(449) 00:12:55.399 fused_ordering(450) 00:12:55.399 fused_ordering(451) 00:12:55.399 fused_ordering(452) 00:12:55.400 fused_ordering(453) 00:12:55.400 fused_ordering(454) 00:12:55.400 fused_ordering(455) 00:12:55.400 fused_ordering(456) 00:12:55.400 fused_ordering(457) 00:12:55.400 fused_ordering(458) 00:12:55.400 fused_ordering(459) 00:12:55.400 fused_ordering(460) 00:12:55.400 fused_ordering(461) 00:12:55.400 fused_ordering(462) 00:12:55.400 fused_ordering(463) 00:12:55.400 
fused_ordering(464) 00:12:55.400 fused_ordering(465) 00:12:55.400 fused_ordering(466) 00:12:55.400 fused_ordering(467) 00:12:55.400 fused_ordering(468) 00:12:55.400 fused_ordering(469) 00:12:55.400 fused_ordering(470) 00:12:55.400 fused_ordering(471) 00:12:55.400 fused_ordering(472) 00:12:55.400 fused_ordering(473) 00:12:55.400 fused_ordering(474) 00:12:55.400 fused_ordering(475) 00:12:55.400 fused_ordering(476) 00:12:55.400 fused_ordering(477) 00:12:55.400 fused_ordering(478) 00:12:55.400 fused_ordering(479) 00:12:55.400 fused_ordering(480) 00:12:55.400 fused_ordering(481) 00:12:55.400 fused_ordering(482) 00:12:55.400 fused_ordering(483) 00:12:55.400 fused_ordering(484) 00:12:55.400 fused_ordering(485) 00:12:55.400 fused_ordering(486) 00:12:55.400 fused_ordering(487) 00:12:55.400 fused_ordering(488) 00:12:55.400 fused_ordering(489) 00:12:55.400 fused_ordering(490) 00:12:55.400 fused_ordering(491) 00:12:55.400 fused_ordering(492) 00:12:55.400 fused_ordering(493) 00:12:55.400 fused_ordering(494) 00:12:55.400 fused_ordering(495) 00:12:55.400 fused_ordering(496) 00:12:55.400 fused_ordering(497) 00:12:55.400 fused_ordering(498) 00:12:55.400 fused_ordering(499) 00:12:55.400 fused_ordering(500) 00:12:55.400 fused_ordering(501) 00:12:55.400 fused_ordering(502) 00:12:55.400 fused_ordering(503) 00:12:55.400 fused_ordering(504) 00:12:55.400 fused_ordering(505) 00:12:55.400 fused_ordering(506) 00:12:55.400 fused_ordering(507) 00:12:55.400 fused_ordering(508) 00:12:55.400 fused_ordering(509) 00:12:55.400 fused_ordering(510) 00:12:55.400 fused_ordering(511) 00:12:55.400 fused_ordering(512) 00:12:55.400 fused_ordering(513) 00:12:55.400 fused_ordering(514) 00:12:55.400 fused_ordering(515) 00:12:55.400 fused_ordering(516) 00:12:55.400 fused_ordering(517) 00:12:55.400 fused_ordering(518) 00:12:55.400 fused_ordering(519) 00:12:55.400 fused_ordering(520) 00:12:55.400 fused_ordering(521) 00:12:55.400 fused_ordering(522) 00:12:55.400 fused_ordering(523) 00:12:55.400 fused_ordering(524) 
00:12:55.400 fused_ordering(525) 00:12:55.400 fused_ordering(526) 00:12:55.400 fused_ordering(527) 00:12:55.400 fused_ordering(528) 00:12:55.400 fused_ordering(529) 00:12:55.400 fused_ordering(530) 00:12:55.400 fused_ordering(531) 00:12:55.400 fused_ordering(532) 00:12:55.400 fused_ordering(533) 00:12:55.400 fused_ordering(534) 00:12:55.400 fused_ordering(535) 00:12:55.400 fused_ordering(536) 00:12:55.400 fused_ordering(537) 00:12:55.400 fused_ordering(538) 00:12:55.400 fused_ordering(539) 00:12:55.400 fused_ordering(540) 00:12:55.400 fused_ordering(541) 00:12:55.400 fused_ordering(542) 00:12:55.400 fused_ordering(543) 00:12:55.400 fused_ordering(544) 00:12:55.400 fused_ordering(545) 00:12:55.400 fused_ordering(546) 00:12:55.400 fused_ordering(547) 00:12:55.400 fused_ordering(548) 00:12:55.400 fused_ordering(549) 00:12:55.400 fused_ordering(550) 00:12:55.400 fused_ordering(551) 00:12:55.400 fused_ordering(552) 00:12:55.400 fused_ordering(553) 00:12:55.400 fused_ordering(554) 00:12:55.400 fused_ordering(555) 00:12:55.400 fused_ordering(556) 00:12:55.400 fused_ordering(557) 00:12:55.400 fused_ordering(558) 00:12:55.400 fused_ordering(559) 00:12:55.400 fused_ordering(560) 00:12:55.400 fused_ordering(561) 00:12:55.400 fused_ordering(562) 00:12:55.400 fused_ordering(563) 00:12:55.400 fused_ordering(564) 00:12:55.400 fused_ordering(565) 00:12:55.400 fused_ordering(566) 00:12:55.400 fused_ordering(567) 00:12:55.400 fused_ordering(568) 00:12:55.400 fused_ordering(569) 00:12:55.400 fused_ordering(570) 00:12:55.400 fused_ordering(571) 00:12:55.400 fused_ordering(572) 00:12:55.400 fused_ordering(573) 00:12:55.400 fused_ordering(574) 00:12:55.400 fused_ordering(575) 00:12:55.400 fused_ordering(576) 00:12:55.400 fused_ordering(577) 00:12:55.400 fused_ordering(578) 00:12:55.400 fused_ordering(579) 00:12:55.400 fused_ordering(580) 00:12:55.400 fused_ordering(581) 00:12:55.400 fused_ordering(582) 00:12:55.400 fused_ordering(583) 00:12:55.400 fused_ordering(584) 00:12:55.400 
fused_ordering(585) 00:12:55.400 fused_ordering(586) 00:12:55.400 fused_ordering(587) 00:12:55.400 fused_ordering(588) 00:12:55.400 fused_ordering(589) 00:12:55.400 fused_ordering(590) 00:12:55.400 fused_ordering(591) 00:12:55.400 fused_ordering(592) 00:12:55.400 fused_ordering(593) 00:12:55.400 fused_ordering(594) 00:12:55.400 fused_ordering(595) 00:12:55.400 fused_ordering(596) 00:12:55.400 fused_ordering(597) 00:12:55.400 fused_ordering(598) 00:12:55.400 fused_ordering(599) 00:12:55.400 fused_ordering(600) 00:12:55.400 fused_ordering(601) 00:12:55.400 fused_ordering(602) 00:12:55.400 fused_ordering(603) 00:12:55.400 fused_ordering(604) 00:12:55.400 fused_ordering(605) 00:12:55.400 fused_ordering(606) 00:12:55.400 fused_ordering(607) 00:12:55.400 fused_ordering(608) 00:12:55.400 fused_ordering(609) 00:12:55.400 fused_ordering(610) 00:12:55.400 fused_ordering(611) 00:12:55.400 fused_ordering(612) 00:12:55.400 fused_ordering(613) 00:12:55.400 fused_ordering(614) 00:12:55.400 fused_ordering(615) 00:12:55.662 fused_ordering(616) 00:12:55.662 fused_ordering(617) 00:12:55.662 fused_ordering(618) 00:12:55.662 fused_ordering(619) 00:12:55.662 fused_ordering(620) 00:12:55.662 fused_ordering(621) 00:12:55.662 fused_ordering(622) 00:12:55.662 fused_ordering(623) 00:12:55.662 fused_ordering(624) 00:12:55.662 fused_ordering(625) 00:12:55.662 fused_ordering(626) 00:12:55.662 fused_ordering(627) 00:12:55.662 fused_ordering(628) 00:12:55.662 fused_ordering(629) 00:12:55.662 fused_ordering(630) 00:12:55.662 fused_ordering(631) 00:12:55.662 fused_ordering(632) 00:12:55.662 fused_ordering(633) 00:12:55.662 fused_ordering(634) 00:12:55.662 fused_ordering(635) 00:12:55.662 fused_ordering(636) 00:12:55.662 fused_ordering(637) 00:12:55.662 fused_ordering(638) 00:12:55.662 fused_ordering(639) 00:12:55.662 fused_ordering(640) 00:12:55.662 fused_ordering(641) 00:12:55.662 fused_ordering(642) 00:12:55.662 fused_ordering(643) 00:12:55.662 fused_ordering(644) 00:12:55.662 fused_ordering(645) 
00:12:55.662 fused_ordering(646) 00:12:55.662 fused_ordering(647) 00:12:55.662 fused_ordering(648) 00:12:55.662 fused_ordering(649) 00:12:55.662 fused_ordering(650) 00:12:55.662 fused_ordering(651) 00:12:55.662 fused_ordering(652) 00:12:55.662 fused_ordering(653) 00:12:55.662 fused_ordering(654) 00:12:55.662 fused_ordering(655) 00:12:55.662 fused_ordering(656) 00:12:55.662 fused_ordering(657) 00:12:55.662 fused_ordering(658) 00:12:55.662 fused_ordering(659) 00:12:55.662 fused_ordering(660) 00:12:55.662 fused_ordering(661) 00:12:55.662 fused_ordering(662) 00:12:55.662 fused_ordering(663) 00:12:55.662 fused_ordering(664) 00:12:55.662 fused_ordering(665) 00:12:55.662 fused_ordering(666) 00:12:55.662 fused_ordering(667) 00:12:55.662 fused_ordering(668) 00:12:55.662 fused_ordering(669) 00:12:55.662 fused_ordering(670) 00:12:55.662 fused_ordering(671) 00:12:55.662 fused_ordering(672) 00:12:55.662 fused_ordering(673) 00:12:55.662 fused_ordering(674) 00:12:55.662 fused_ordering(675) 00:12:55.662 fused_ordering(676) 00:12:55.662 fused_ordering(677) 00:12:55.662 fused_ordering(678) 00:12:55.662 fused_ordering(679) 00:12:55.662 fused_ordering(680) 00:12:55.662 fused_ordering(681) 00:12:55.662 fused_ordering(682) 00:12:55.662 fused_ordering(683) 00:12:55.662 fused_ordering(684) 00:12:55.662 fused_ordering(685) 00:12:55.662 fused_ordering(686) 00:12:55.662 fused_ordering(687) 00:12:55.662 fused_ordering(688) 00:12:55.662 fused_ordering(689) 00:12:55.662 fused_ordering(690) 00:12:55.662 fused_ordering(691) 00:12:55.662 fused_ordering(692) 00:12:55.662 fused_ordering(693) 00:12:55.662 fused_ordering(694) 00:12:55.662 fused_ordering(695) 00:12:55.662 fused_ordering(696) 00:12:55.662 fused_ordering(697) 00:12:55.662 fused_ordering(698) 00:12:55.662 fused_ordering(699) 00:12:55.662 fused_ordering(700) 00:12:55.662 fused_ordering(701) 00:12:55.662 fused_ordering(702) 00:12:55.662 fused_ordering(703) 00:12:55.662 fused_ordering(704) 00:12:55.662 fused_ordering(705) 00:12:55.662 
fused_ordering(706) 00:12:55.662 ... fused_ordering(1007) 00:12:56.237 [repetitive fused_ordering trace lines for commands 706-1007 elided; timestamps advance from 00:12:55.662 to 00:12:56.237]
fused_ordering(1008) 00:12:56.237 fused_ordering(1009) 00:12:56.237 fused_ordering(1010) 00:12:56.237 fused_ordering(1011) 00:12:56.237 fused_ordering(1012) 00:12:56.237 fused_ordering(1013) 00:12:56.237 fused_ordering(1014) 00:12:56.237 fused_ordering(1015) 00:12:56.237 fused_ordering(1016) 00:12:56.237 fused_ordering(1017) 00:12:56.237 fused_ordering(1018) 00:12:56.237 fused_ordering(1019) 00:12:56.237 fused_ordering(1020) 00:12:56.237 fused_ordering(1021) 00:12:56.237 fused_ordering(1022) 00:12:56.237 fused_ordering(1023) 00:12:56.237 14:49:56 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:56.237 14:49:56 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:56.237 14:49:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:56.237 14:49:56 -- nvmf/common.sh@117 -- # sync 00:12:56.237 14:49:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:56.237 14:49:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:56.237 14:49:56 -- nvmf/common.sh@120 -- # set +e 00:12:56.237 14:49:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.237 14:49:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:56.237 rmmod nvme_rdma 00:12:56.237 rmmod nvme_fabrics 00:12:56.237 14:49:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.237 14:49:56 -- nvmf/common.sh@124 -- # set -e 00:12:56.237 14:49:56 -- nvmf/common.sh@125 -- # return 0 00:12:56.237 14:49:56 -- nvmf/common.sh@478 -- # '[' -n 179893 ']' 00:12:56.237 14:49:56 -- nvmf/common.sh@479 -- # killprocess 179893 00:12:56.237 14:49:56 -- common/autotest_common.sh@936 -- # '[' -z 179893 ']' 00:12:56.237 14:49:56 -- common/autotest_common.sh@940 -- # kill -0 179893 00:12:56.237 14:49:56 -- common/autotest_common.sh@941 -- # uname 00:12:56.237 14:49:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.237 14:49:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 179893 00:12:56.237 14:49:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
00:12:56.237 14:49:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:56.237 14:49:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 179893' 00:12:56.237 killing process with pid 179893 00:12:56.237 14:49:56 -- common/autotest_common.sh@955 -- # kill 179893 00:12:56.237 14:49:56 -- common/autotest_common.sh@960 -- # wait 179893 00:12:56.499 [2024-04-26 14:49:56.328172] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:57.878 14:49:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:57.878 14:49:57 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:57.878 00:12:57.878 real 0m6.263s 00:12:57.878 user 0m6.174s 00:12:57.878 sys 0m1.975s 00:12:57.878 14:49:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.878 14:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:57.878 ************************************ 00:12:57.878 END TEST nvmf_fused_ordering 00:12:57.878 ************************************ 00:12:57.878 14:49:57 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:57.878 14:49:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:57.878 14:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.878 14:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:57.878 ************************************ 00:12:57.878 START TEST nvmf_delete_subsystem 00:12:57.878 ************************************ 00:12:57.878 14:49:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:57.878 * Looking for test storage... 
00:12:57.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:57.878 14:49:57 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.878 14:49:57 -- nvmf/common.sh@7 -- # uname -s 00:12:57.878 14:49:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.878 14:49:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.878 14:49:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.878 14:49:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.878 14:49:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.878 14:49:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.878 14:49:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.878 14:49:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.878 14:49:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.878 14:49:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.878 14:49:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:12:57.879 14:49:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:12:57.879 14:49:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.879 14:49:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.879 14:49:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.879 14:49:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.879 14:49:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:57.879 14:49:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.879 14:49:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.879 14:49:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.879 14:49:57 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.879 14:49:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.879 14:49:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.879 14:49:57 -- paths/export.sh@5 -- # export PATH 00:12:57.879 14:49:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.879 14:49:57 -- nvmf/common.sh@47 -- # : 0 00:12:57.879 14:49:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.879 14:49:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.879 14:49:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.879 14:49:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.879 14:49:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.879 14:49:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.879 14:49:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.879 14:49:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.879 14:49:57 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:57.879 14:49:57 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:57.879 14:49:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.879 14:49:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:57.879 14:49:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:57.879 14:49:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:57.879 14:49:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.879 14:49:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.879 14:49:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.879 14:49:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:57.879 14:49:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:57.879 14:49:57 
-- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.879 14:49:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.792 14:49:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:59.792 14:49:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.792 14:49:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.792 14:49:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.792 14:49:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.792 14:49:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.792 14:49:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.792 14:49:59 -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.792 14:49:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.792 14:49:59 -- nvmf/common.sh@296 -- # e810=() 00:12:59.792 14:49:59 -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.792 14:49:59 -- nvmf/common.sh@297 -- # x722=() 00:12:59.792 14:49:59 -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.792 14:49:59 -- nvmf/common.sh@298 -- # mlx=() 00:12:59.792 14:49:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.792 14:49:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.792 14:49:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.792 14:49:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.792 14:49:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:59.792 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:59.792 14:49:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.792 14:49:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.792 14:49:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:59.792 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:59.792 14:49:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.792 14:49:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.792 14:49:59 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.792 14:49:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:59.792 14:49:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:59.792 14:49:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.792 14:49:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:59.792 Found net devices under 0000:09:00.0: mlx_0_0 00:12:59.792 14:49:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.792 14:49:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.792 14:49:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:59.792 14:49:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.792 14:49:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:59.792 Found net devices under 0000:09:00.1: mlx_0_1 00:12:59.792 14:49:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.792 14:49:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:59.792 14:49:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:59.792 14:49:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:59.792 14:49:59 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:59.792 14:49:59 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:59.792 14:49:59 -- nvmf/common.sh@58 -- # uname 00:12:59.792 14:49:59 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:59.792 14:49:59 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:59.792 14:49:59 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:59.792 14:49:59 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:59.792 14:49:59 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:59.792 14:49:59 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:59.792 14:49:59 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:59.792 14:49:59 -- nvmf/common.sh@68 -- # modprobe 
rdma_ucm 00:12:59.792 14:49:59 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:59.792 14:49:59 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.793 14:49:59 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:59.793 14:49:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.793 14:49:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.793 14:49:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.793 14:49:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.793 14:49:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.793 14:49:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.793 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.793 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.793 14:49:59 -- nvmf/common.sh@105 -- # continue 2 00:12:59.793 14:49:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.793 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.793 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.793 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.793 14:49:59 -- nvmf/common.sh@105 -- # continue 2 00:12:59.793 14:49:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.793 14:49:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:59.793 14:49:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.793 14:49:59 -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:59.793 14:49:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:59.793 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.793 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:12:59.793 altname enp9s0f0np0 00:12:59.793 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.793 valid_lft forever preferred_lft forever 00:12:59.793 14:49:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.793 14:49:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:59.793 14:49:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.793 14:49:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.793 14:49:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:59.793 14:49:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:59.793 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.793 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:12:59.793 altname enp9s0f1np1 00:12:59.793 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.793 valid_lft forever preferred_lft forever 00:12:59.793 14:49:59 -- nvmf/common.sh@411 -- # return 0 00:12:59.793 14:49:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:59.793 14:49:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.793 14:49:59 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:59.793 14:49:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:59.793 14:49:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:59.793 14:49:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.793 14:49:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.793 14:49:59 -- nvmf/common.sh@94 -- # 
rxe_cfg rxe-net 00:12:59.793 14:49:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:00.054 14:49:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:00.054 14:49:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.054 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.054 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:00.054 14:49:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:00.054 14:49:59 -- nvmf/common.sh@105 -- # continue 2 00:13:00.054 14:49:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.054 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.054 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:00.054 14:49:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.054 14:49:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:00.054 14:49:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:00.054 14:49:59 -- nvmf/common.sh@105 -- # continue 2 00:13:00.054 14:49:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.054 14:49:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:00.054 14:49:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.054 14:49:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.054 14:49:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:00.054 14:49:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.054 14:49:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.054 14:49:59 -- 
nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:00.054 192.168.100.9' 00:13:00.054 14:49:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:00.054 192.168.100.9' 00:13:00.054 14:49:59 -- nvmf/common.sh@446 -- # head -n 1 00:13:00.054 14:49:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:00.054 14:49:59 -- nvmf/common.sh@447 -- # tail -n +2 00:13:00.054 14:49:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:00.054 192.168.100.9' 00:13:00.054 14:49:59 -- nvmf/common.sh@447 -- # head -n 1 00:13:00.054 14:49:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:00.054 14:49:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:00.054 14:49:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:00.054 14:49:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:00.054 14:49:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:00.054 14:49:59 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:00.054 14:49:59 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:00.054 14:49:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:00.054 14:49:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:00.054 14:49:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.054 14:49:59 -- nvmf/common.sh@470 -- # nvmfpid=182108 00:13:00.054 14:49:59 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:00.054 14:49:59 -- nvmf/common.sh@471 -- # waitforlisten 182108 00:13:00.054 14:49:59 -- common/autotest_common.sh@817 -- # '[' -z 182108 ']' 00:13:00.054 14:49:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.054 14:49:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.054 14:49:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.054 14:49:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.055 14:49:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.055 [2024-04-26 14:49:59.993058] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:00.055 [2024-04-26 14:49:59.993213] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.055 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.055 [2024-04-26 14:50:00.127889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:00.315 [2024-04-26 14:50:00.387631] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.315 [2024-04-26 14:50:00.387715] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.315 [2024-04-26 14:50:00.387741] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.315 [2024-04-26 14:50:00.387766] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.315 [2024-04-26 14:50:00.387786] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
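Earlier in this trace, nvmftestinit derives NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP from RDMA_IP_LIST with `head` and `tail`. A minimal stand-alone reproduction of that parsing step (the two addresses are the ones discovered on mlx_0_0/mlx_0_1 in this run):

```shell
#!/usr/bin/env bash
# Reproduce the head/tail splitting done by nvmf/common.sh on RDMA_IP_LIST.
# RDMA_IP_LIST is a newline-separated list of the RDMA-capable interface IPs.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First line -> first target IP; skip line 1 with `tail -n +2` for the second.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
```

This mirrors the `head -n 1` / `tail -n +2` pipeline visible in the common.sh@446-447 trace lines above.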
00:13:00.315 [2024-04-26 14:50:00.387921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.315 [2024-04-26 14:50:00.387925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.253 14:50:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:01.253 14:50:00 -- common/autotest_common.sh@850 -- # return 0 00:13:01.253 14:50:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:01.253 14:50:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:01.253 14:50:00 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 14:50:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 [2024-04-26 14:50:01.051720] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027c40/0x7fb591b45940) succeed. 00:13:01.253 [2024-04-26 14:50:01.064005] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027dc0/0x7fb591b01940) succeed. 
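What follows in the log is the core of `delete_subsystem.sh`: start `spdk_nvme_perf` against the target in the background, delete the subsystem while I/O is in flight, then poll the perf pid with `kill -0` until it exits. The pattern boils down to the sketch below; a plain `sleep` stands in for the real perf workload and the RPC delete call is elided, so this is a distilled illustration rather than the actual test script:

```shell
# Stand-in workload: in the real test this is spdk_nvme_perf driving
# I/O at the subsystem that is about to be deleted.
sleep 2 &
perf_pid=$!

# ... here the test deletes the subsystem via rpc.py while I/O runs ...

# Poll until the workload notices and exits, mirroring the script's
# "(( delay++ > 30 )) / kill -0 / sleep 0.5" loop seen in the log.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then
        echo "timeout waiting for workload to exit" >&2
        break
    fi
    sleep 0.5
done
echo "workload gone after ${delay} polls"
```

Deleting the subsystem yanks the qpairs out from under perf, which is why the log below shows "CQ transport error -6" and a burst of failed completions before the process disappears and `kill -0` starts failing.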
00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 [2024-04-26 14:50:01.250964] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 NULL1 00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 Delay0 00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.253 14:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.253 14:50:01 -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 14:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@28 -- # perf_pid=182264 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:01.253 14:50:01 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:01.513 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.513 [2024-04-26 14:50:01.411551] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:03.422 14:50:03 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.422 14:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.422 14:50:03 -- common/autotest_common.sh@10 -- # set +x 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 NVMe io qpair process completion error 00:13:04.797 14:50:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.797 14:50:04 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:04.797 14:50:04 -- target/delete_subsystem.sh@35 -- # kill -0 182264 00:13:04.797 14:50:04 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:05.058 14:50:05 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:05.058 14:50:05 -- target/delete_subsystem.sh@35 -- # kill -0 182264 00:13:05.058 14:50:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:05.632 Read completed with error (sct=0, sc=8) 00:13:05.632 starting I/O failed: -6 00:13:05.632 Read completed with error (sct=0, sc=8) 
00:13:05.632 starting I/O failed: -6 00:13:05.632 Read completed with error (sct=0, sc=8) 00:13:05.632 starting I/O failed: -6 00:13:05.632 Write completed with error (sct=0, sc=8) 00:13:05.632 starting I/O failed: -6 00:13:05.632 Write completed with error (sct=0, sc=8)
00:13:05.632 [ ... several hundred further "Read/Write completed with error (sct=0, sc=8)" completions, many interleaved with "starting I/O failed: -6", elided ... ]
00:13:05.634 14:50:05 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:05.634 14:50:05 -- target/delete_subsystem.sh@35 -- # kill -0 182264 00:13:05.634 14:50:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:05.634 [2024-04-26 14:50:05.561494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:13:05.634 [2024-04-26 14:50:05.561604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:13:05.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:05.634 Initializing NVMe Controllers 00:13:05.634 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:05.634 Controller IO queue size 128, less than required.
00:13:05.634 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:05.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:05.634 Initialization complete. Launching workers. 00:13:05.634 ======================================================== 00:13:05.634 Latency(us) 00:13:05.634 Device Information : IOPS MiB/s Average min max 00:13:05.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.58 0.04 1592715.76 1000363.06 2970886.74 00:13:05.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.58 0.04 1594594.97 1001382.65 2971935.25 00:13:05.634 ======================================================== 00:13:05.634 Total : 161.15 0.08 1593655.36 1000363.06 2971935.25 00:13:05.634 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@35 -- # kill -0 182264 00:13:06.208 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (182264) - No such process 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@45 -- # NOT wait 182264 00:13:06.208 14:50:06 -- common/autotest_common.sh@638 -- # local es=0 00:13:06.208 14:50:06 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 182264 00:13:06.208 14:50:06 -- common/autotest_common.sh@626 -- # local arg=wait 00:13:06.208 14:50:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:06.208 14:50:06 -- common/autotest_common.sh@630 -- # type -t wait 00:13:06.208 14:50:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:06.208 14:50:06 -- common/autotest_common.sh@641 -- # wait 182264 00:13:06.208 14:50:06 -- common/autotest_common.sh@641 -- # es=1 00:13:06.208 14:50:06 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:06.208 14:50:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:06.208 14:50:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:06.208 14:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.208 14:50:06 -- common/autotest_common.sh@10 -- # set +x 00:13:06.208 14:50:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:06.208 14:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.208 14:50:06 -- common/autotest_common.sh@10 -- # set +x 00:13:06.208 [2024-04-26 14:50:06.042786] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:06.208 14:50:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.208 14:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.208 14:50:06 -- common/autotest_common.sh@10 -- # set +x 00:13:06.208 14:50:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@54 -- # perf_pid=182823 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:06.208 14:50:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.208 EAL: No free 2048 kB hugepages reported on 
node 1 00:13:06.208 [2024-04-26 14:50:06.187096] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:06.779 14:50:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.779 14:50:06 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:06.779 14:50:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.037 14:50:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.037 14:50:07 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:07.037 14:50:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.609 14:50:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.609 14:50:07 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:07.609 14:50:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:08.178 14:50:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:08.178 14:50:08 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:08.178 14:50:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:08.749 14:50:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:08.749 14:50:08 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:08.749 14:50:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:09.015 14:50:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:09.015 14:50:09 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:09.015 14:50:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:09.586 14:50:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:09.586 14:50:09 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:09.586 14:50:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:10.151 14:50:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:10.152 14:50:10 -- 
target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:10.152 14:50:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:10.722 14:50:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:10.722 14:50:10 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:10.722 14:50:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:11.290 14:50:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:11.290 14:50:11 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:11.290 14:50:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:11.548 14:50:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:11.548 14:50:11 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:11.548 14:50:11 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:12.120 14:50:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:12.120 14:50:12 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:12.120 14:50:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:12.690 14:50:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:12.690 14:50:12 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:12.690 14:50:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:13.258 14:50:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:13.258 14:50:13 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:13.258 14:50:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:13.517 Initializing NVMe Controllers 00:13:13.517 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:13.517 Controller IO queue size 128, less than required. 00:13:13.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:13:13.517 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:13.517 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:13.517 Initialization complete. Launching workers. 00:13:13.517 ======================================================== 00:13:13.517 Latency(us) 00:13:13.517 Device Information : IOPS MiB/s Average min max 00:13:13.518 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002070.46 1000115.04 1005488.67 00:13:13.518 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003526.70 1000185.54 1008828.92 00:13:13.518 ======================================================== 00:13:13.518 Total : 256.00 0.12 1002798.58 1000115.04 1008828.92 00:13:13.518 00:13:13.518 14:50:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:13.518 14:50:13 -- target/delete_subsystem.sh@57 -- # kill -0 182823 00:13:13.518 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (182823) - No such process 00:13:13.518 14:50:13 -- target/delete_subsystem.sh@67 -- # wait 182823 00:13:13.518 14:50:13 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:13.518 14:50:13 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:13.518 14:50:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:13.518 14:50:13 -- nvmf/common.sh@117 -- # sync 00:13:13.778 14:50:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:13.778 14:50:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:13.778 14:50:13 -- nvmf/common.sh@120 -- # set +e 00:13:13.778 14:50:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.778 14:50:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:13.778 rmmod nvme_rdma 00:13:13.778 rmmod nvme_fabrics 00:13:13.778 14:50:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.778 14:50:13 -- 
nvmf/common.sh@124 -- # set -e
00:13:13.778 14:50:13 -- nvmf/common.sh@125 -- # return 0
00:13:13.778 14:50:13 -- nvmf/common.sh@478 -- # '[' -n 182108 ']'
00:13:13.778 14:50:13 -- nvmf/common.sh@479 -- # killprocess 182108
00:13:13.778 14:50:13 -- common/autotest_common.sh@936 -- # '[' -z 182108 ']'
00:13:13.778 14:50:13 -- common/autotest_common.sh@940 -- # kill -0 182108
00:13:13.778 14:50:13 -- common/autotest_common.sh@941 -- # uname
00:13:13.778 14:50:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:13.778 14:50:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 182108
00:13:13.778 14:50:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:13.778 14:50:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:13.778 14:50:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 182108'
00:13:13.778 killing process with pid 182108
00:13:13.778 14:50:13 -- common/autotest_common.sh@955 -- # kill 182108
00:13:13.778 14:50:13 -- common/autotest_common.sh@960 -- # wait 182108
00:13:14.040 [2024-04-26 14:50:13.990653] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:13:15.446 14:50:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:15.446 14:50:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:13:15.446
00:13:15.446 real 0m17.487s
00:13:15.446 user 0m51.252s
00:13:15.446 sys 0m2.914s
00:13:15.446 14:50:15 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:15.446 14:50:15 -- common/autotest_common.sh@10 -- # set +x
00:13:15.446 ************************************
00:13:15.446 END TEST nvmf_delete_subsystem
00:13:15.446 ************************************
00:13:15.446 14:50:15 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma
00:13:15.446 14:50:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:15.446 14:50:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:15.446 14:50:15 -- common/autotest_common.sh@10 -- # set +x
00:13:15.446 ************************************
00:13:15.446 START TEST nvmf_ns_masking
00:13:15.446 ************************************
00:13:15.446 14:50:15 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma
00:13:15.446 * Looking for test storage...
00:13:15.446 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:13:15.446 14:50:15 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:13:15.446 14:50:15 -- nvmf/common.sh@7 -- # uname -s
00:13:15.446 14:50:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:15.446 14:50:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:15.446 14:50:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:15.446 14:50:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:15.446 14:50:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:15.446 14:50:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:15.446 14:50:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:15.446 14:50:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:15.446 14:50:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:15.446 14:50:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:15.446 14:50:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:13:15.446 14:50:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:13:15.446 14:50:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:15.446 14:50:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:15.446 14:50:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:15.446 14:50:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:15.446 14:50:15 -- nvmf/common.sh@45 -- # source
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.446 14:50:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.446 14:50:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.446 14:50:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.446 14:50:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.446 14:50:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.446 14:50:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.446 14:50:15 -- paths/export.sh@5 -- # export PATH 00:13:15.446 14:50:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.446 14:50:15 -- nvmf/common.sh@47 -- # : 0 00:13:15.446 14:50:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.446 14:50:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.446 14:50:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.446 14:50:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.446 14:50:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.446 14:50:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.446 14:50:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.446 14:50:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.446 14:50:15 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.446 14:50:15 -- target/ns_masking.sh@11 -- # loops=5 
00:13:15.446 14:50:15 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:15.446 14:50:15 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:15.446 14:50:15 -- target/ns_masking.sh@15 -- # uuidgen 00:13:15.446 14:50:15 -- target/ns_masking.sh@15 -- # HOSTID=a5e59467-8004-4de3-aa72-8308ade81e41 00:13:15.446 14:50:15 -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:15.446 14:50:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:15.446 14:50:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.446 14:50:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:15.446 14:50:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:15.446 14:50:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:15.446 14:50:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.446 14:50:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.446 14:50:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.446 14:50:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:15.446 14:50:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:15.446 14:50:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.446 14:50:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.988 14:50:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:17.988 14:50:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.988 14:50:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.988 14:50:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.988 14:50:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.988 14:50:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.988 14:50:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.988 14:50:17 -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.988 14:50:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.988 14:50:17 -- nvmf/common.sh@296 -- # e810=() 00:13:17.988 14:50:17 -- 
nvmf/common.sh@296 -- # local -ga e810 00:13:17.988 14:50:17 -- nvmf/common.sh@297 -- # x722=() 00:13:17.988 14:50:17 -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.988 14:50:17 -- nvmf/common.sh@298 -- # mlx=() 00:13:17.988 14:50:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.988 14:50:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.988 14:50:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.988 14:50:17 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:17.988 14:50:17 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:17.988 14:50:17 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:17.988 14:50:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.988 14:50:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.988 14:50:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:13:17.988 Found 0000:09:00.0 (0x15b3 - 
0x1017) 00:13:17.988 14:50:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:17.988 14:50:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.988 14:50:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:13:17.988 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:13:17.988 14:50:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:17.988 14:50:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.988 14:50:17 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:17.988 14:50:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.988 14:50:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.988 14:50:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:17.988 14:50:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.988 14:50:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:13:17.988 Found net devices under 0000:09:00.0: mlx_0_0 00:13:17.988 14:50:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.989 14:50:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.989 14:50:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:17.989 14:50:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.989 14:50:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:13:17.989 Found 
net devices under 0000:09:00.1: mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.989 14:50:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:17.989 14:50:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:17.989 14:50:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:17.989 14:50:17 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:17.989 14:50:17 -- nvmf/common.sh@58 -- # uname 00:13:17.989 14:50:17 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:17.989 14:50:17 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:17.989 14:50:17 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:17.989 14:50:17 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:17.989 14:50:17 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:17.989 14:50:17 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:17.989 14:50:17 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:17.989 14:50:17 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:17.989 14:50:17 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:17.989 14:50:17 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:17.989 14:50:17 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:17.989 14:50:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:17.989 14:50:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:17.989 14:50:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:17.989 14:50:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:17.989 14:50:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:17.989 14:50:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@105 -- # continue 2 00:13:17.989 14:50:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@105 -- # continue 2 00:13:17.989 14:50:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:17.989 14:50:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.989 14:50:17 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:17.989 14:50:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:17.989 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:17.989 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:13:17.989 altname enp9s0f0np0 00:13:17.989 inet 192.168.100.8/24 scope global mlx_0_0 00:13:17.989 valid_lft forever preferred_lft forever 00:13:17.989 14:50:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:17.989 14:50:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.989 
14:50:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.989 14:50:17 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:17.989 14:50:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:17.989 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:17.989 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:13:17.989 altname enp9s0f1np1 00:13:17.989 inet 192.168.100.9/24 scope global mlx_0_1 00:13:17.989 valid_lft forever preferred_lft forever 00:13:17.989 14:50:17 -- nvmf/common.sh@411 -- # return 0 00:13:17.989 14:50:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:17.989 14:50:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:17.989 14:50:17 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:17.989 14:50:17 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:17.989 14:50:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:17.989 14:50:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:17.989 14:50:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:17.989 14:50:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:17.989 14:50:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:17.989 14:50:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@105 -- # continue 2 00:13:17.989 14:50:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:17.989 
14:50:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:17.989 14:50:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:17.989 14:50:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@105 -- # continue 2 00:13:17.989 14:50:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:17.989 14:50:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.989 14:50:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:17.989 14:50:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:17.989 14:50:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:17.989 14:50:17 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:17.989 192.168.100.9' 00:13:17.989 14:50:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:17.989 192.168.100.9' 00:13:17.989 14:50:17 -- nvmf/common.sh@446 -- # head -n 1 00:13:17.989 14:50:17 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:17.989 14:50:17 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:17.989 192.168.100.9' 00:13:17.989 14:50:17 -- nvmf/common.sh@447 -- # tail -n +2 00:13:17.989 14:50:17 -- nvmf/common.sh@447 -- # head -n 1 00:13:17.989 14:50:17 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:17.989 14:50:17 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:17.989 14:50:17 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:17.989 14:50:17 -- nvmf/common.sh@457 
-- # '[' rdma == tcp ']' 00:13:17.989 14:50:17 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:17.989 14:50:17 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:17.989 14:50:17 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:17.989 14:50:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:17.989 14:50:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:17.989 14:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:17.989 14:50:17 -- nvmf/common.sh@470 -- # nvmfpid=185666 00:13:17.989 14:50:17 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.989 14:50:17 -- nvmf/common.sh@471 -- # waitforlisten 185666 00:13:17.989 14:50:17 -- common/autotest_common.sh@817 -- # '[' -z 185666 ']' 00:13:17.989 14:50:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.989 14:50:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:17.989 14:50:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.989 14:50:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:17.989 14:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:17.989 [2024-04-26 14:50:17.648506] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:17.989 [2024-04-26 14:50:17.648639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.989 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.989 [2024-04-26 14:50:17.774504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.989 [2024-04-26 14:50:18.026910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.989 [2024-04-26 14:50:18.026988] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.989 [2024-04-26 14:50:18.027017] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.989 [2024-04-26 14:50:18.027040] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.989 [2024-04-26 14:50:18.027068] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.989 [2024-04-26 14:50:18.027219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.989 [2024-04-26 14:50:18.027285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.989 [2024-04-26 14:50:18.027386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.989 [2024-04-26 14:50:18.027392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.556 14:50:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:18.556 14:50:18 -- common/autotest_common.sh@850 -- # return 0 00:13:18.556 14:50:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:18.556 14:50:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:18.556 14:50:18 -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 14:50:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.556 14:50:18 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:18.815 [2024-04-26 14:50:18.826046] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7efd39dbd940) succeed. 00:13:18.815 [2024-04-26 14:50:18.837236] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7efd39d79940) succeed. 
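Earlier in the trace (nvmf/common.sh@445-447), the two discovered RDMA addresses were split into first and second target IPs with `head` and `tail`. A minimal reconstruction of that parsing, using the two addresses this log actually reports:

```shell
# Split a newline-separated IP list into first/second target addresses,
# as traced at nvmf/common.sh@445-447; the IPs below come from this log.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # @446
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # @447
```

`tail -n +2` emits everything from the second line onward, so piping it through `head -n 1` isolates exactly the second address even if more interfaces were found.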
00:13:19.385 14:50:19 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:19.385 14:50:19 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:19.385 14:50:19 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:19.643 Malloc1 00:13:19.643 14:50:19 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:19.903 Malloc2 00:13:19.903 14:50:19 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.161 14:50:20 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:20.420 14:50:20 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:20.678 [2024-04-26 14:50:20.594336] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:20.678 14:50:20 -- target/ns_masking.sh@61 -- # connect 00:13:20.678 14:50:20 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5e59467-8004-4de3-aa72-8308ade81e41 -a 192.168.100.8 -s 4420 -i 4 00:13:21.619 14:50:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.619 14:50:21 -- common/autotest_common.sh@1184 -- # local i=0 00:13:21.619 14:50:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.619 14:50:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:21.619 14:50:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:23.547 14:50:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:23.547 14:50:23 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:23.547 14:50:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.806 14:50:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:23.806 14:50:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.806 14:50:23 -- common/autotest_common.sh@1194 -- # return 0 00:13:23.806 14:50:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:23.806 14:50:23 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:23.806 14:50:23 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:23.806 14:50:23 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:23.806 14:50:23 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:23.806 14:50:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:23.806 14:50:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:23.806 [ 0]:0x1 00:13:23.806 14:50:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:23.806 14:50:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:23.806 14:50:23 -- target/ns_masking.sh@40 -- # nguid=7d6c696dcd5c4f60bff62b520caeb77d 00:13:23.806 14:50:23 -- target/ns_masking.sh@41 -- # [[ 7d6c696dcd5c4f60bff62b520caeb77d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:23.806 14:50:23 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:24.065 14:50:23 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:24.065 14:50:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:24.065 14:50:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:24.065 [ 0]:0x1 00:13:24.065 14:50:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:24.065 14:50:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:24.065 14:50:24 -- 
target/ns_masking.sh@40 -- # nguid=7d6c696dcd5c4f60bff62b520caeb77d 00:13:24.065 14:50:24 -- target/ns_masking.sh@41 -- # [[ 7d6c696dcd5c4f60bff62b520caeb77d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.065 14:50:24 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:24.065 14:50:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:24.065 14:50:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:24.065 [ 1]:0x2 00:13:24.065 14:50:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:24.065 14:50:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:24.065 14:50:24 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:24.065 14:50:24 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:24.065 14:50:24 -- target/ns_masking.sh@69 -- # disconnect 00:13:24.065 14:50:24 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.003 14:50:24 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.262 14:50:25 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:25.523 14:50:25 -- target/ns_masking.sh@77 -- # connect 1 00:13:25.523 14:50:25 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5e59467-8004-4de3-aa72-8308ade81e41 -a 192.168.100.8 -s 4420 -i 4 00:13:26.467 14:50:26 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:26.467 14:50:26 -- common/autotest_common.sh@1184 -- # local i=0 00:13:26.467 14:50:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:26.467 14:50:26 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:13:26.467 14:50:26 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:13:26.467 14:50:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:28.374 14:50:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:28.374 14:50:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:28.374 14:50:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.374 14:50:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:28.374 14:50:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.374 14:50:28 -- common/autotest_common.sh@1194 -- # return 0 00:13:28.374 14:50:28 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:28.374 14:50:28 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:28.374 14:50:28 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:28.374 14:50:28 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:28.374 14:50:28 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:28.374 14:50:28 -- common/autotest_common.sh@638 -- # local es=0 00:13:28.374 14:50:28 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:28.374 14:50:28 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:28.374 14:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.374 14:50:28 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:28.374 14:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.374 14:50:28 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:28.374 14:50:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:28.374 14:50:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:28.374 14:50:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
00:13:28.374 14:50:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:28.632 14:50:28 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:28.632 14:50:28 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.632 14:50:28 -- common/autotest_common.sh@641 -- # es=1 00:13:28.632 14:50:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:28.633 14:50:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:28.633 14:50:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:28.633 14:50:28 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:28.633 14:50:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:28.633 14:50:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:28.633 [ 0]:0x2 00:13:28.633 14:50:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:28.633 14:50:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:28.633 14:50:28 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:28.633 14:50:28 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.633 14:50:28 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:28.891 14:50:28 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:28.891 14:50:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:28.891 14:50:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:28.891 [ 0]:0x1 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # nguid=7d6c696dcd5c4f60bff62b520caeb77d 00:13:28.891 14:50:28 -- target/ns_masking.sh@41 -- # [[ 
7d6c696dcd5c4f60bff62b520caeb77d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.891 14:50:28 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:28.891 14:50:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:28.891 14:50:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:28.891 [ 1]:0x2 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:28.891 14:50:28 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:28.891 14:50:28 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:28.891 14:50:28 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:29.150 14:50:29 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:29.150 14:50:29 -- common/autotest_common.sh@638 -- # local es=0 00:13:29.150 14:50:29 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:29.150 14:50:29 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:29.150 14:50:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:29.150 14:50:29 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:29.150 14:50:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:29.150 14:50:29 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:29.150 14:50:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:29.150 14:50:29 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 
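The `ns_is_visible` checks traced above (ns_masking.sh@39-41) boil down to one rule: a namespace counts as visible to this host when `nvme id-ns` reports a non-zero NGUID, and masked when the NGUID comes back all zeros. A minimal sketch of that comparison, using NGUID values copied from the log (in the script the value comes from `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`; the function name is illustrative):

```shell
# Visibility test distilled from ns_masking.sh@41: non-zero NGUID == visible.
ns_is_visible_nguid() {
    [[ $1 != "00000000000000000000000000000000" ]]
}

ns_is_visible_nguid a0a2dd630a2843eeaeb6cb2b42301f46 && echo visible  # nsid 2 in this run
ns_is_visible_nguid 00000000000000000000000000000000 || echo hidden   # masked nsid 1
```

This is why the test wraps the nsid-1 check in `NOT …`: after `nvmf_ns_remove_host`, the controller still lists the namespace slot but reports the zero NGUID, so the inner check is expected to fail.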
00:13:29.150 14:50:29 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.150 14:50:29 -- common/autotest_common.sh@641 -- # es=1 00:13:29.150 14:50:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:29.150 14:50:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:29.150 14:50:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:29.150 14:50:29 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:29.150 14:50:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:29.150 14:50:29 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:29.150 [ 0]:0x2 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:29.150 14:50:29 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:29.150 14:50:29 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.150 14:50:29 -- target/ns_masking.sh@91 -- # disconnect 00:13:29.150 14:50:29 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.088 14:50:29 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:30.088 14:50:30 -- target/ns_masking.sh@95 -- # connect 2 00:13:30.088 14:50:30 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5e59467-8004-4de3-aa72-8308ade81e41 -a 192.168.100.8 -s 4420 -i 4 00:13:31.029 14:50:31 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:31.029 14:50:31 -- common/autotest_common.sh@1184 -- # local i=0 00:13:31.029 14:50:31 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.029 14:50:31 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:31.029 14:50:31 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:31.029 14:50:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:33.568 14:50:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:33.568 14:50:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:33.568 14:50:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.568 14:50:33 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:33.568 14:50:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.568 14:50:33 -- common/autotest_common.sh@1194 -- # return 0 00:13:33.568 14:50:33 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:33.568 14:50:33 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:33.568 14:50:33 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:33.568 14:50:33 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:33.568 14:50:33 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:13:33.568 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:33.568 14:50:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:33.568 [ 0]:0x1 00:13:33.568 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.568 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.568 14:50:33 -- target/ns_masking.sh@40 -- # nguid=7d6c696dcd5c4f60bff62b520caeb77d 00:13:33.568 14:50:33 -- target/ns_masking.sh@41 -- # [[ 7d6c696dcd5c4f60bff62b520caeb77d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.568 14:50:33 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:13:33.568 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:33.568 14:50:33 -- 
target/ns_masking.sh@39 -- # grep 0x2 00:13:33.568 [ 1]:0x2 00:13:33.568 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:33.569 14:50:33 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.569 14:50:33 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.569 14:50:33 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:13:33.569 14:50:33 -- common/autotest_common.sh@638 -- # local es=0 00:13:33.569 14:50:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.569 14:50:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.569 14:50:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:33.569 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:33.569 14:50:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:33.569 14:50:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.569 14:50:33 -- common/autotest_common.sh@641 -- # es=1 00:13:33.569 14:50:33 -- common/autotest_common.sh@649 -- 
# (( es > 128 )) 00:13:33.569 14:50:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:33.569 14:50:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:33.569 14:50:33 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:13:33.569 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:33.569 14:50:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:33.569 [ 0]:0x2 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.569 14:50:33 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:33.569 14:50:33 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.569 14:50:33 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.569 14:50:33 -- common/autotest_common.sh@638 -- # local es=0 00:13:33.569 14:50:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.569 14:50:33 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.569 14:50:33 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:33.569 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.569 14:50:33 -- common/autotest_common.sh@632 -- 
# arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:33.569 14:50:33 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:33.569 14:50:33 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:33.827 [2024-04-26 14:50:33.773514] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:33.827 request: 00:13:33.827 { 00:13:33.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.827 "nsid": 2, 00:13:33.827 "host": "nqn.2016-06.io.spdk:host1", 00:13:33.827 "method": "nvmf_ns_remove_host", 00:13:33.827 "req_id": 1 00:13:33.827 } 00:13:33.827 Got JSON-RPC error response 00:13:33.827 response: 00:13:33.827 { 00:13:33.827 "code": -32602, 00:13:33.827 "message": "Invalid parameters" 00:13:33.827 } 00:13:33.827 14:50:33 -- common/autotest_common.sh@641 -- # es=1 00:13:33.827 14:50:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:33.827 14:50:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:33.827 14:50:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:33.827 14:50:33 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:13:33.827 14:50:33 -- common/autotest_common.sh@638 -- # local es=0 00:13:33.827 14:50:33 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:13:33.828 14:50:33 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:13:33.828 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.828 14:50:33 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:13:33.828 14:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:33.828 14:50:33 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:13:33.828 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 
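The rejected call above is the test's negative case: `nvmf_ns_remove_host` on nsid 2 fails with -32602 because that namespace was added auto-visible and has no per-host allow list to edit. A sketch of the JSON-RPC payload `rpc.py` would send, reconstructed from the logged request (the `jsonrpc`/`id` envelope is the standard 2.0 framing; treat exact field order as illustrative):

```shell
# Assumed JSON-RPC request shape for nvmf_ns_remove_host, built from the
# nqn/nsid/host fields shown in the logged "request:" block above.
request='{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_ns_remove_host",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "nsid": 2,
    "host": "nqn.2016-06.io.spdk:host1"
  }
}'

# Pull the method name back out, the way a caller might sanity-check a payload.
method=$(printf '%s\n' "$request" | sed -n 's/.*"method": "\([^"]*\)".*/\1/p')
echo "$method"
```

The target answers with code -32602 ("Invalid parameters"), which `rpc.py` surfaces as the non-zero exit status the `NOT` wrapper expects.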
00:13:33.828 14:50:33 -- target/ns_masking.sh@39 -- # grep 0x1 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:33.828 14:50:33 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.828 14:50:33 -- common/autotest_common.sh@641 -- # es=1 00:13:33.828 14:50:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:33.828 14:50:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:33.828 14:50:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:33.828 14:50:33 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:13:33.828 14:50:33 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:33.828 14:50:33 -- target/ns_masking.sh@39 -- # grep 0x2 00:13:33.828 [ 0]:0x2 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:33.828 14:50:33 -- target/ns_masking.sh@40 -- # nguid=a0a2dd630a2843eeaeb6cb2b42301f46 00:13:33.828 14:50:33 -- target/ns_masking.sh@41 -- # [[ a0a2dd630a2843eeaeb6cb2b42301f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:33.828 14:50:33 -- target/ns_masking.sh@108 -- # disconnect 00:13:33.828 14:50:33 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.767 14:50:34 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.767 14:50:34 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:34.767 14:50:34 -- target/ns_masking.sh@114 -- # nvmftestfini 00:13:34.767 14:50:34 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:13:34.767 14:50:34 -- nvmf/common.sh@117 -- # sync 00:13:34.767 14:50:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:34.767 14:50:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:34.767 14:50:34 -- nvmf/common.sh@120 -- # set +e 00:13:34.767 14:50:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.767 14:50:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:34.767 rmmod nvme_rdma 00:13:35.028 rmmod nvme_fabrics 00:13:35.028 14:50:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.028 14:50:34 -- nvmf/common.sh@124 -- # set -e 00:13:35.028 14:50:34 -- nvmf/common.sh@125 -- # return 0 00:13:35.028 14:50:34 -- nvmf/common.sh@478 -- # '[' -n 185666 ']' 00:13:35.028 14:50:34 -- nvmf/common.sh@479 -- # killprocess 185666 00:13:35.028 14:50:34 -- common/autotest_common.sh@936 -- # '[' -z 185666 ']' 00:13:35.028 14:50:34 -- common/autotest_common.sh@940 -- # kill -0 185666 00:13:35.028 14:50:34 -- common/autotest_common.sh@941 -- # uname 00:13:35.028 14:50:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.028 14:50:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 185666 00:13:35.028 14:50:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.028 14:50:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.028 14:50:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 185666' 00:13:35.028 killing process with pid 185666 00:13:35.028 14:50:34 -- common/autotest_common.sh@955 -- # kill 185666 00:13:35.028 14:50:34 -- common/autotest_common.sh@960 -- # wait 185666 00:13:35.598 [2024-04-26 14:50:35.410914] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:36.980 14:50:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:36.980 14:50:36 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:36.980 00:13:36.980 real 0m21.600s 00:13:36.980 user 1m18.007s 
00:13:36.980 sys 0m2.972s 00:13:36.980 14:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.980 14:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.980 ************************************ 00:13:36.980 END TEST nvmf_ns_masking 00:13:36.980 ************************************ 00:13:36.980 14:50:37 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:36.980 14:50:37 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:36.980 14:50:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:36.980 14:50:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.980 14:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:37.240 ************************************ 00:13:37.240 START TEST nvmf_nvme_cli 00:13:37.240 ************************************ 00:13:37.240 14:50:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:37.240 * Looking for test storage... 
00:13:37.240 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:37.240 14:50:37 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.240 14:50:37 -- nvmf/common.sh@7 -- # uname -s 00:13:37.240 14:50:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.240 14:50:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.240 14:50:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.240 14:50:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.240 14:50:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.240 14:50:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.240 14:50:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.240 14:50:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.240 14:50:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.240 14:50:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.240 14:50:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:37.240 14:50:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:37.240 14:50:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.240 14:50:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.240 14:50:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.240 14:50:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.240 14:50:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:37.240 14:50:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.240 14:50:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.240 14:50:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.240 14:50:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.240 14:50:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.240 14:50:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.240 14:50:37 -- paths/export.sh@5 -- # export PATH 00:13:37.240 14:50:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.240 14:50:37 -- nvmf/common.sh@47 -- # : 0 00:13:37.240 14:50:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.240 14:50:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.240 14:50:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.240 14:50:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.240 14:50:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.240 14:50:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.240 14:50:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.240 14:50:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.240 14:50:37 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.240 14:50:37 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.240 14:50:37 -- target/nvme_cli.sh@14 -- # devs=() 00:13:37.240 14:50:37 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:37.240 14:50:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:37.240 14:50:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.240 14:50:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:37.240 14:50:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:37.240 14:50:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:37.240 14:50:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.240 14:50:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.240 14:50:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.240 14:50:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:37.240 14:50:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:37.240 14:50:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.240 14:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:39.146 14:50:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:39.146 14:50:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.146 14:50:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.146 14:50:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.146 14:50:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.146 14:50:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.146 14:50:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.146 14:50:39 -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.146 14:50:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.146 14:50:39 -- nvmf/common.sh@296 -- # e810=() 00:13:39.146 14:50:39 -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.146 14:50:39 -- nvmf/common.sh@297 -- # x722=() 00:13:39.146 14:50:39 -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.146 14:50:39 -- nvmf/common.sh@298 -- # mlx=() 00:13:39.146 14:50:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.146 14:50:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.146 14:50:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.146 14:50:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:39.146 14:50:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:39.146 14:50:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:39.146 14:50:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.146 14:50:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.146 14:50:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:13:39.146 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:13:39.146 14:50:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.146 14:50:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.146 14:50:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:13:39.146 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:13:39.146 14:50:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:39.146 14:50:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.147 14:50:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@372 -- # [[ mlx5 
== e810 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.147 14:50:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.147 14:50:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:13:39.147 Found net devices under 0000:09:00.0: mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.147 14:50:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.147 14:50:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.147 14:50:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:13:39.147 Found net devices under 0000:09:00.1: mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.147 14:50:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:39.147 14:50:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:39.147 14:50:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:39.147 14:50:39 -- nvmf/common.sh@58 -- # uname 00:13:39.147 14:50:39 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:39.147 14:50:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:39.147 14:50:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:39.147 14:50:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:39.147 14:50:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 
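The `gather_supported_nvmf_pci_devs` trace above builds its `e810`/`x722`/`mlx` arrays by matching PCI vendor:device IDs from `pci_bus_cache`; this run found two Mellanox 0x15b3:0x1017 ports. A simplified sketch of that bucketing, with `classify_nic` as a hypothetical helper and the case table reduced to the IDs visible in the trace:

```shell
# Bucket a NIC by vendor:device ID, mirroring the pci_bus_cache lookups in
# nvmf/common.sh@301-318 above (table trimmed to the IDs seen in this log).
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unsupported ;;
    esac
}

classify_nic 0x15b3:0x1017   # the ConnectX ports found at 0000:09:00.0/.1
```

Landing in the `mlx` bucket is also what switches `NVME_CONNECT` to `nvme connect -i 15` earlier in the trace.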
00:13:39.147 14:50:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:39.147 14:50:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:39.147 14:50:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:39.147 14:50:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:39.147 14:50:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:39.147 14:50:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:39.147 14:50:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.147 14:50:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:39.147 14:50:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:39.147 14:50:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.147 14:50:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@105 -- # continue 2 00:13:39.147 14:50:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@105 -- # continue 2 00:13:39.147 14:50:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:39.147 14:50:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.147 14:50:39 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:39.147 14:50:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:39.147 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.147 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:13:39.147 altname enp9s0f0np0 00:13:39.147 inet 192.168.100.8/24 scope global mlx_0_0 00:13:39.147 valid_lft forever preferred_lft forever 00:13:39.147 14:50:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:39.147 14:50:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.147 14:50:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:39.147 14:50:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:39.147 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.147 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:13:39.147 altname enp9s0f1np1 00:13:39.147 inet 192.168.100.9/24 scope global mlx_0_1 00:13:39.147 valid_lft forever preferred_lft forever 00:13:39.147 14:50:39 -- nvmf/common.sh@411 -- # return 0 00:13:39.147 14:50:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:39.147 14:50:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:39.147 14:50:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:39.147 14:50:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:39.147 14:50:39 -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.147 14:50:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:39.147 14:50:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:39.147 14:50:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.147 14:50:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:39.147 14:50:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@105 -- # continue 2 00:13:39.147 14:50:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.147 14:50:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.147 14:50:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@105 -- # continue 2 00:13:39.147 14:50:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:39.147 14:50:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.147 14:50:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:39.147 14:50:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # 
ip -o -4 addr show mlx_0_1 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.147 14:50:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.147 14:50:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:39.147 192.168.100.9' 00:13:39.147 14:50:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:39.147 192.168.100.9' 00:13:39.147 14:50:39 -- nvmf/common.sh@446 -- # head -n 1 00:13:39.147 14:50:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:39.147 14:50:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:39.147 192.168.100.9' 00:13:39.147 14:50:39 -- nvmf/common.sh@447 -- # tail -n +2 00:13:39.147 14:50:39 -- nvmf/common.sh@447 -- # head -n 1 00:13:39.147 14:50:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:39.147 14:50:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:39.147 14:50:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:39.147 14:50:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:39.147 14:50:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:39.147 14:50:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:39.409 14:50:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:39.409 14:50:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:39.409 14:50:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:39.409 14:50:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.409 14:50:39 -- nvmf/common.sh@470 -- # nvmfpid=189778 00:13:39.409 14:50:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.409 14:50:39 -- nvmf/common.sh@471 -- # waitforlisten 189778 00:13:39.409 14:50:39 -- common/autotest_common.sh@817 -- # '[' -z 189778 ']' 00:13:39.409 14:50:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.409 14:50:39 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:13:39.409 14:50:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.409 14:50:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.409 14:50:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.409 [2024-04-26 14:50:39.324244] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:39.409 [2024-04-26 14:50:39.324384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.409 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.409 [2024-04-26 14:50:39.452088] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.670 [2024-04-26 14:50:39.703901] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.670 [2024-04-26 14:50:39.703979] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.670 [2024-04-26 14:50:39.704007] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.670 [2024-04-26 14:50:39.704030] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.670 [2024-04-26 14:50:39.704048] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:39.670 [2024-04-26 14:50:39.704182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.670 [2024-04-26 14:50:39.704250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.671 [2024-04-26 14:50:39.704340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.671 [2024-04-26 14:50:39.704347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.238 14:50:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:40.238 14:50:40 -- common/autotest_common.sh@850 -- # return 0 00:13:40.238 14:50:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:40.238 14:50:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:40.238 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.238 14:50:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.238 14:50:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:40.238 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.238 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.498 [2024-04-26 14:50:40.320415] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f734cf48940) succeed. 00:13:40.498 [2024-04-26 14:50:40.332525] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f734cf04940) succeed. 
00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 Malloc0 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 Malloc1 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 [2024-04-26 14:50:40.816390] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:40.757 14:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.757 14:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:40.757 14:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.757 14:50:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -a 192.168.100.8 -s 4420 00:13:41.016 00:13:41.016 Discovery Log Number of Records 2, Generation counter 2 00:13:41.016 =====Discovery Log Entry 0====== 00:13:41.016 trtype: rdma 00:13:41.016 adrfam: ipv4 00:13:41.016 subtype: current discovery subsystem 00:13:41.016 treq: not required 00:13:41.016 portid: 0 00:13:41.016 trsvcid: 4420 00:13:41.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:41.016 traddr: 192.168.100.8 00:13:41.016 eflags: explicit discovery connections, duplicate discovery information 00:13:41.016 rdma_prtype: not specified 00:13:41.016 rdma_qptype: connected 00:13:41.016 rdma_cms: rdma-cm 00:13:41.016 rdma_pkey: 0x0000 00:13:41.016 =====Discovery Log Entry 1====== 00:13:41.016 trtype: rdma 00:13:41.016 adrfam: ipv4 00:13:41.016 subtype: nvme subsystem 00:13:41.016 treq: not required 00:13:41.016 portid: 0 00:13:41.016 trsvcid: 4420 00:13:41.016 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:41.016 traddr: 192.168.100.8 00:13:41.016 eflags: none 00:13:41.016 rdma_prtype: not specified 00:13:41.016 rdma_qptype: connected 00:13:41.016 rdma_cms: rdma-cm 00:13:41.017 rdma_pkey: 0x0000 00:13:41.017 14:50:40 -- target/nvme_cli.sh@31 -- # 
devs=($(get_nvme_devs)) 00:13:41.017 14:50:40 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:41.017 14:50:40 -- nvmf/common.sh@511 -- # local dev _ 00:13:41.017 14:50:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:41.017 14:50:40 -- nvmf/common.sh@510 -- # nvme list 00:13:41.017 14:50:40 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:41.017 14:50:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:41.017 14:50:40 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:41.017 14:50:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:41.017 14:50:40 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:41.017 14:50:40 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:44.306 14:50:44 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:44.306 14:50:44 -- common/autotest_common.sh@1184 -- # local i=0 00:13:44.306 14:50:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.306 14:50:44 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:44.306 14:50:44 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:44.306 14:50:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:46.848 14:50:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:46.848 14:50:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:46.848 14:50:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.848 14:50:46 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:13:46.848 14:50:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.848 14:50:46 -- common/autotest_common.sh@1194 -- # return 0 00:13:46.848 14:50:46 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:46.848 14:50:46 -- 
nvmf/common.sh@511 -- # local dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@510 -- # nvme list 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:46.848 /dev/nvme0n1 ]] 00:13:46.848 14:50:46 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:46.848 14:50:46 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:46.848 14:50:46 -- nvmf/common.sh@511 -- # local dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@510 -- # nvme list 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:46.848 14:50:46 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:46.848 14:50:46 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:46.848 14:50:46 -- nvmf/common.sh@513 -- # 
read -r dev _ 00:13:46.848 14:50:46 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:46.848 14:50:46 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.755 14:50:48 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.755 14:50:48 -- common/autotest_common.sh@1205 -- # local i=0 00:13:48.755 14:50:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:48.755 14:50:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.755 14:50:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:48.755 14:50:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.755 14:50:48 -- common/autotest_common.sh@1217 -- # return 0 00:13:48.755 14:50:48 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:48.755 14:50:48 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.755 14:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.755 14:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.014 14:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.014 14:50:48 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:49.014 14:50:48 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:49.014 14:50:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:49.014 14:50:48 -- nvmf/common.sh@117 -- # sync 00:13:49.014 14:50:48 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:49.014 14:50:48 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:49.014 14:50:48 -- nvmf/common.sh@120 -- # set +e 00:13:49.014 14:50:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.014 14:50:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:49.014 rmmod nvme_rdma 00:13:49.014 rmmod nvme_fabrics 00:13:49.014 14:50:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:13:49.014 14:50:48 -- nvmf/common.sh@124 -- # set -e 00:13:49.014 14:50:48 -- nvmf/common.sh@125 -- # return 0 00:13:49.014 14:50:48 -- nvmf/common.sh@478 -- # '[' -n 189778 ']' 00:13:49.014 14:50:48 -- nvmf/common.sh@479 -- # killprocess 189778 00:13:49.014 14:50:48 -- common/autotest_common.sh@936 -- # '[' -z 189778 ']' 00:13:49.014 14:50:48 -- common/autotest_common.sh@940 -- # kill -0 189778 00:13:49.014 14:50:48 -- common/autotest_common.sh@941 -- # uname 00:13:49.014 14:50:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.014 14:50:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 189778 00:13:49.014 14:50:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:49.014 14:50:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:49.014 14:50:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 189778' 00:13:49.014 killing process with pid 189778 00:13:49.014 14:50:48 -- common/autotest_common.sh@955 -- # kill 189778 00:13:49.014 14:50:48 -- common/autotest_common.sh@960 -- # wait 189778 00:13:49.584 [2024-04-26 14:50:49.478741] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:51.494 14:50:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:51.494 14:50:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:51.494 00:13:51.494 real 0m14.069s 00:13:51.494 user 0m44.173s 00:13:51.494 sys 0m2.285s 00:13:51.494 14:50:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:51.494 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:51.494 ************************************ 00:13:51.494 END TEST nvmf_nvme_cli 00:13:51.494 ************************************ 00:13:51.494 14:50:51 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:51.494 14:50:51 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:51.494 14:50:51 
-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:51.494 14:50:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.494 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:51.494 ************************************ 00:13:51.494 START TEST nvmf_host_management 00:13:51.494 ************************************ 00:13:51.494 14:50:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:51.494 * Looking for test storage... 00:13:51.494 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:51.494 14:50:51 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.494 14:50:51 -- nvmf/common.sh@7 -- # uname -s 00:13:51.494 14:50:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.494 14:50:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.494 14:50:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.494 14:50:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.494 14:50:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.494 14:50:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.494 14:50:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.494 14:50:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.494 14:50:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.494 14:50:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.494 14:50:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:13:51.494 14:50:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:13:51.494 14:50:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.494 14:50:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.494 14:50:51 -- nvmf/common.sh@21 -- 
# NET_TYPE=phy 00:13:51.494 14:50:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.494 14:50:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:51.494 14:50:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.494 14:50:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.494 14:50:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.494 14:50:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.494 14:50:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.494 14:50:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.494 14:50:51 -- paths/export.sh@5 -- # export PATH 00:13:51.494 14:50:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.494 14:50:51 -- nvmf/common.sh@47 -- # : 0 00:13:51.494 14:50:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.494 14:50:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.494 14:50:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.494 14:50:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.494 14:50:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.494 14:50:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.494 14:50:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.495 14:50:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.495 14:50:51 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.495 14:50:51 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.495 14:50:51 -- 
target/host_management.sh@105 -- # nvmftestinit 00:13:51.495 14:50:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:51.495 14:50:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.495 14:50:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:51.495 14:50:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:51.495 14:50:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:51.495 14:50:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.495 14:50:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.495 14:50:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.495 14:50:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:51.495 14:50:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:51.495 14:50:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.495 14:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.402 14:50:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.402 14:50:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.402 14:50:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.402 14:50:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.402 14:50:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.402 14:50:53 -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.402 14:50:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@296 -- # e810=() 00:13:53.402 14:50:53 -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.402 14:50:53 -- nvmf/common.sh@297 -- # x722=() 00:13:53.402 14:50:53 -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.402 14:50:53 -- nvmf/common.sh@298 -- # mlx=() 00:13:53.402 14:50:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.402 14:50:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:13:53.402 14:50:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.402 14:50:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:13:53.402 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:13:53.402 14:50:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.402 14:50:53 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:13:53.402 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:13:53.402 14:50:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.402 14:50:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.402 14:50:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.402 14:50:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:13:53.402 Found net devices under 0000:09:00.0: mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.402 14:50:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.402 14:50:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:13:53.402 Found net devices under 0000:09:00.1: mlx_0_1 00:13:53.402 14:50:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.402 14:50:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:53.402 14:50:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@406 -- # [[ rdma == 
tcp ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:53.402 14:50:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:53.402 14:50:53 -- nvmf/common.sh@58 -- # uname 00:13:53.402 14:50:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:53.402 14:50:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:53.402 14:50:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:53.402 14:50:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:53.402 14:50:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:53.402 14:50:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:53.402 14:50:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:53.402 14:50:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:53.402 14:50:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:53.402 14:50:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:53.402 14:50:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:53.402 14:50:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:53.402 14:50:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.402 14:50:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@105 -- # continue 2 00:13:53.402 14:50:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.402 14:50:53 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:53.402 14:50:53 -- nvmf/common.sh@105 -- # continue 2 00:13:53.402 14:50:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:53.402 14:50:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.402 14:50:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:53.402 14:50:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:53.402 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:53.402 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:13:53.402 altname enp9s0f0np0 00:13:53.402 inet 192.168.100.8/24 scope global mlx_0_0 00:13:53.402 valid_lft forever preferred_lft forever 00:13:53.402 14:50:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:53.402 14:50:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:53.402 14:50:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.402 14:50:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.402 14:50:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:53.402 14:50:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:53.402 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:53.402 link/ether 
b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:13:53.402 altname enp9s0f1np1 00:13:53.402 inet 192.168.100.9/24 scope global mlx_0_1 00:13:53.402 valid_lft forever preferred_lft forever 00:13:53.402 14:50:53 -- nvmf/common.sh@411 -- # return 0 00:13:53.402 14:50:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:53.402 14:50:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:53.402 14:50:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:53.402 14:50:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:53.402 14:50:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:53.402 14:50:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:53.402 14:50:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.402 14:50:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:53.402 14:50:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.402 14:50:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:53.402 14:50:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:53.402 14:50:53 -- nvmf/common.sh@105 -- # continue 2 00:13:53.403 14:50:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.403 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.403 14:50:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:53.403 14:50:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.403 14:50:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:53.403 14:50:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:53.403 14:50:53 -- nvmf/common.sh@105 -- # continue 2 00:13:53.403 14:50:53 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:13:53.403 14:50:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:53.403 14:50:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:53.403 14:50:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:53.403 14:50:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.403 14:50:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.403 14:50:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:53.403 14:50:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:53.403 14:50:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:53.662 14:50:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:53.662 14:50:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.662 14:50:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.662 14:50:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:53.662 192.168.100.9' 00:13:53.662 14:50:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:53.662 192.168.100.9' 00:13:53.662 14:50:53 -- nvmf/common.sh@446 -- # head -n 1 00:13:53.662 14:50:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:53.662 14:50:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:53.662 192.168.100.9' 00:13:53.662 14:50:53 -- nvmf/common.sh@447 -- # tail -n +2 00:13:53.662 14:50:53 -- nvmf/common.sh@447 -- # head -n 1 00:13:53.662 14:50:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:53.662 14:50:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:53.662 14:50:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:53.662 14:50:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:53.662 14:50:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:53.662 14:50:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:53.662 14:50:53 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:53.662 14:50:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 
1 ']' 00:13:53.662 14:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.662 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:13:53.662 ************************************ 00:13:53.662 START TEST nvmf_host_management 00:13:53.662 ************************************ 00:13:53.662 14:50:53 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:53.662 14:50:53 -- target/host_management.sh@69 -- # starttarget 00:13:53.662 14:50:53 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:53.662 14:50:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:53.662 14:50:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:53.662 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:13:53.662 14:50:53 -- nvmf/common.sh@470 -- # nvmfpid=192903 00:13:53.662 14:50:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:53.662 14:50:53 -- nvmf/common.sh@471 -- # waitforlisten 192903 00:13:53.662 14:50:53 -- common/autotest_common.sh@817 -- # '[' -z 192903 ']' 00:13:53.662 14:50:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.662 14:50:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:53.662 14:50:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.662 14:50:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:53.662 14:50:53 -- common/autotest_common.sh@10 -- # set +x 00:13:53.662 [2024-04-26 14:50:53.680369] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:53.662 [2024-04-26 14:50:53.680523] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.922 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.922 [2024-04-26 14:50:53.812217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.183 [2024-04-26 14:50:54.073036] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.183 [2024-04-26 14:50:54.073117] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.183 [2024-04-26 14:50:54.073155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.183 [2024-04-26 14:50:54.073180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.183 [2024-04-26 14:50:54.073200] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.183 [2024-04-26 14:50:54.073308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.183 [2024-04-26 14:50:54.073378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.183 [2024-04-26 14:50:54.073428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.183 [2024-04-26 14:50:54.073435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:54.751 14:50:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:54.751 14:50:54 -- common/autotest_common.sh@850 -- # return 0 00:13:54.751 14:50:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:54.751 14:50:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:54.751 14:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:54.751 14:50:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.751 14:50:54 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:54.751 14:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.751 14:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:54.751 [2024-04-26 14:50:54.717869] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000283c0/0x7fdd54298940) succeed. 00:13:54.751 [2024-04-26 14:50:54.728894] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028540/0x7fdd54254940) succeed. 
00:13:55.011 14:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.011 14:50:55 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:55.011 14:50:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:55.011 14:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:55.011 14:50:55 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:55.011 14:50:55 -- target/host_management.sh@23 -- # cat 00:13:55.011 14:50:55 -- target/host_management.sh@30 -- # rpc_cmd 00:13:55.011 14:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.011 14:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 Malloc0 00:13:55.270 [2024-04-26 14:50:55.131685] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:55.270 14:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.270 14:50:55 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:55.270 14:50:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.270 14:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 14:50:55 -- target/host_management.sh@73 -- # perfpid=193146 00:13:55.270 14:50:55 -- target/host_management.sh@74 -- # waitforlisten 193146 /var/tmp/bdevperf.sock 00:13:55.270 14:50:55 -- common/autotest_common.sh@817 -- # '[' -z 193146 ']' 00:13:55.270 14:50:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.270 14:50:55 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:55.270 14:50:55 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:55.270 14:50:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:55.270 14:50:55 -- nvmf/common.sh@521 -- # config=() 00:13:55.270 14:50:55 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.270 14:50:55 -- nvmf/common.sh@521 -- # local subsystem config 00:13:55.270 14:50:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:55.270 14:50:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:55.270 14:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 14:50:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:55.270 { 00:13:55.270 "params": { 00:13:55.270 "name": "Nvme$subsystem", 00:13:55.270 "trtype": "$TEST_TRANSPORT", 00:13:55.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.270 "adrfam": "ipv4", 00:13:55.270 "trsvcid": "$NVMF_PORT", 00:13:55.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.270 "hdgst": ${hdgst:-false}, 00:13:55.270 "ddgst": ${ddgst:-false} 00:13:55.270 }, 00:13:55.270 "method": "bdev_nvme_attach_controller" 00:13:55.270 } 00:13:55.270 EOF 00:13:55.270 )") 00:13:55.270 14:50:55 -- nvmf/common.sh@543 -- # cat 00:13:55.270 14:50:55 -- nvmf/common.sh@545 -- # jq . 00:13:55.270 14:50:55 -- nvmf/common.sh@546 -- # IFS=, 00:13:55.270 14:50:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:55.270 "params": { 00:13:55.270 "name": "Nvme0", 00:13:55.270 "trtype": "rdma", 00:13:55.270 "traddr": "192.168.100.8", 00:13:55.270 "adrfam": "ipv4", 00:13:55.270 "trsvcid": "4420", 00:13:55.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:55.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:55.270 "hdgst": false, 00:13:55.270 "ddgst": false 00:13:55.270 }, 00:13:55.270 "method": "bdev_nvme_attach_controller" 00:13:55.270 }' 00:13:55.270 [2024-04-26 14:50:55.245123] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:55.270 [2024-04-26 14:50:55.245281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193146 ] 00:13:55.270 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.530 [2024-04-26 14:50:55.371989] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.530 [2024-04-26 14:50:55.605790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.098 Running I/O for 10 seconds... 00:13:56.356 14:50:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:56.356 14:50:56 -- common/autotest_common.sh@850 -- # return 0 00:13:56.356 14:50:56 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:56.356 14:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.356 14:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:56.356 14:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.356 14:50:56 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:56.356 14:50:56 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:56.356 14:50:56 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:56.356 14:50:56 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:56.356 14:50:56 -- target/host_management.sh@52 -- # local ret=1 00:13:56.356 14:50:56 -- target/host_management.sh@53 -- # local i 00:13:56.356 14:50:56 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:56.356 14:50:56 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:56.356 14:50:56 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:56.356 14:50:56 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:13:56.356 14:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.356 14:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:56.356 14:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.356 14:50:56 -- target/host_management.sh@55 -- # read_io_count=307 00:13:56.357 14:50:56 -- target/host_management.sh@58 -- # '[' 307 -ge 100 ']' 00:13:56.357 14:50:56 -- target/host_management.sh@59 -- # ret=0 00:13:56.357 14:50:56 -- target/host_management.sh@60 -- # break 00:13:56.357 14:50:56 -- target/host_management.sh@64 -- # return 0 00:13:56.357 14:50:56 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:56.357 14:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.357 14:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:56.357 14:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.357 14:50:56 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:56.357 14:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.357 14:50:56 -- common/autotest_common.sh@10 -- # set +x 00:13:56.357 14:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.357 14:50:56 -- target/host_management.sh@87 -- # sleep 1 00:13:57.343 [2024-04-26 14:50:57.259200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.343 [2024-04-26 14:50:57.259300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.259332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.343 [2024-04-26 14:50:57.259361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.259382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.343 [2024-04-26 14:50:57.259401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.259443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.343 [2024-04-26 14:50:57.259462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:13:57.343 [2024-04-26 14:50:57.261131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:13:57.343 [2024-04-26 14:50:57.261195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfc40 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfb80 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfac0 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afa00 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969f940 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968f880 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967f7c0 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966f700 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f640 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f580 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50432 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f4c0 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f400 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.261969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f340 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.261993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f280 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199d2e00 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199c2d40 len:0x10000 key:0x187500 
00:13:57.343 [2024-04-26 14:50:57.262224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199b2c80 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199a2bc0 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019992b00 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019982a40 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019972980 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199628c0 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 [2024-04-26 14:50:57.262749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019952800 len:0x10000 key:0x187500 00:13:57.343 [2024-04-26 14:50:57.262782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.343 14:50:57 -- target/host_management.sh@91 -- # kill -9 193146 00:13:57.343 [2024-04-26 14:50:57.262835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019942740 len:0x10000 key:0x187500 00:13:57.343 14:50:57 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:57.343 14:50:57 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:57.343 14:50:57 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:57.343 14:50:57 -- nvmf/common.sh@521 -- # config=() 00:13:57.343 14:50:57 -- nvmf/common.sh@521 -- # local subsystem config 00:13:57.343 14:50:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:57.343 14:50:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:57.343 { 00:13:57.343 "params": { 00:13:57.343 "name": "Nvme$subsystem", 
00:13:57.343 "trtype": "$TEST_TRANSPORT", 00:13:57.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.343 "adrfam": "ipv4", 00:13:57.344 "trsvcid": "$NVMF_PORT", 00:13:57.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.344 "hdgst": ${hdgst:-false}, 00:13:57.344 "ddgst": ${ddgst:-false} 00:13:57.344 }, 00:13:57.344 "method": "bdev_nvme_attach_controller" 00:13:57.344 } 00:13:57.344 EOF 00:13:57.344 )") 00:13:57.344 14:50:57 -- nvmf/common.sh@543 -- # cat 00:13:57.344 14:50:57 -- nvmf/common.sh@545 -- # jq . 00:13:57.344 14:50:57 -- nvmf/common.sh@546 -- # IFS=, 00:13:57.344 14:50:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:57.344 "params": { 00:13:57.344 "name": "Nvme0", 00:13:57.344 "trtype": "rdma", 00:13:57.344 "traddr": "192.168.100.8", 00:13:57.344 "adrfam": "ipv4", 00:13:57.344 "trsvcid": "4420", 00:13:57.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:57.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:57.344 "hdgst": false, 00:13:57.344 "ddgst": false 00:13:57.344 }, 00:13:57.344 "method": "bdev_nvme_attach_controller" 00:13:57.344 }' 00:13:57.344 [2024-04-26 14:50:57.339417] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:57.344 [2024-04-26 14:50:57.339556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193426 ] 00:13:57.344 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.622 [2024-04-26 14:50:57.462638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.622 [2024-04-26 14:50:57.694679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.216 Running I/O for 1 seconds... 
00:13:59.198 00:13:59.198 Latency(us) 00:13:59.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.198 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:59.198 Verification LBA range: start 0x0 length 0x400 00:13:59.198 Nvme0n1 : 1.02 2125.27 132.83 0.00 0.00 29429.59 1438.15 46603.38 00:13:59.198 =================================================================================================================== 00:13:59.198 Total : 2125.27 132.83 0.00 0.00 29429.59 1438.15 46603.38 00:14:00.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 193146 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:14:00.185 14:51:00 -- target/host_management.sh@102 -- # stoptarget 00:14:00.185 14:51:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:00.185 14:51:00 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:00.185 14:51:00 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:00.185 14:51:00 -- target/host_management.sh@40 -- # nvmftestfini 00:14:00.185 14:51:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:00.185 14:51:00 -- nvmf/common.sh@117 -- # sync 00:14:00.185 14:51:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:00.185 14:51:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:00.185 14:51:00 -- nvmf/common.sh@120 -- # set +e 00:14:00.185 14:51:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.185 14:51:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:00.185 rmmod nvme_rdma 00:14:00.185 rmmod nvme_fabrics 00:14:00.185 14:51:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.185 14:51:00 -- nvmf/common.sh@124 -- # set -e 00:14:00.185 14:51:00 -- 
nvmf/common.sh@125 -- # return 0 00:14:00.185 14:51:00 -- nvmf/common.sh@478 -- # '[' -n 192903 ']' 00:14:00.185 14:51:00 -- nvmf/common.sh@479 -- # killprocess 192903 00:14:00.185 14:51:00 -- common/autotest_common.sh@936 -- # '[' -z 192903 ']' 00:14:00.185 14:51:00 -- common/autotest_common.sh@940 -- # kill -0 192903 00:14:00.185 14:51:00 -- common/autotest_common.sh@941 -- # uname 00:14:00.185 14:51:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.185 14:51:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 192903 00:14:00.185 14:51:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:00.185 14:51:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:00.185 14:51:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 192903' 00:14:00.185 killing process with pid 192903 00:14:00.185 14:51:00 -- common/autotest_common.sh@955 -- # kill 192903 00:14:00.185 14:51:00 -- common/autotest_common.sh@960 -- # wait 192903 00:14:00.813 [2024-04-26 14:51:00.772375] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:02.271 [2024-04-26 14:51:02.062218] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:02.271 14:51:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:02.271 14:51:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:02.271 00:14:02.271 real 0m8.551s 00:14:02.271 user 0m34.771s 00:14:02.271 sys 0m1.484s 00:14:02.271 14:51:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:02.271 14:51:02 -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 ************************************ 00:14:02.271 END TEST nvmf_host_management 00:14:02.271 ************************************ 00:14:02.271 14:51:02 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:02.271 00:14:02.271 real 0m10.852s 00:14:02.271 user 0m35.652s 00:14:02.271 sys 0m3.000s 00:14:02.271 14:51:02 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:14:02.271 14:51:02 -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 ************************************ 00:14:02.271 END TEST nvmf_host_management 00:14:02.271 ************************************ 00:14:02.271 14:51:02 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:14:02.271 14:51:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:02.271 14:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:02.271 14:51:02 -- common/autotest_common.sh@10 -- # set +x 00:14:02.271 ************************************ 00:14:02.271 START TEST nvmf_lvol 00:14:02.271 ************************************ 00:14:02.271 14:51:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:14:02.271 * Looking for test storage... 00:14:02.271 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:02.271 14:51:02 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.271 14:51:02 -- nvmf/common.sh@7 -- # uname -s 00:14:02.271 14:51:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.271 14:51:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.271 14:51:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.271 14:51:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.271 14:51:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.271 14:51:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.271 14:51:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.271 14:51:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.271 14:51:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.271 14:51:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.546 14:51:02 -- nvmf/common.sh@17 
-- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:02.546 14:51:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:02.546 14:51:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.546 14:51:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.546 14:51:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.546 14:51:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.546 14:51:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:02.546 14:51:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.546 14:51:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.546 14:51:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.546 14:51:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.546 14:51:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.546 14:51:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.546 14:51:02 -- paths/export.sh@5 -- # export PATH 00:14:02.546 14:51:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.546 14:51:02 -- nvmf/common.sh@47 -- # : 0 00:14:02.546 14:51:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.546 14:51:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.546 14:51:02 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:14:02.546 14:51:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.546 14:51:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.546 14:51:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.546 14:51:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.546 14:51:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:02.546 14:51:02 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:02.546 14:51:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:02.546 14:51:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.546 14:51:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:02.546 14:51:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:02.546 14:51:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:02.546 14:51:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.546 14:51:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.546 14:51:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.546 14:51:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:02.546 14:51:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:02.546 14:51:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.546 14:51:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.535 14:51:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:04.535 14:51:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.535 14:51:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.535 14:51:04 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.535 14:51:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.535 14:51:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.535 14:51:04 -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.535 14:51:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@296 -- # e810=() 00:14:04.535 14:51:04 -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.535 14:51:04 -- nvmf/common.sh@297 -- # x722=() 00:14:04.535 14:51:04 -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.535 14:51:04 -- nvmf/common.sh@298 -- # mlx=() 00:14:04.535 14:51:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.535 14:51:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.535 14:51:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.535 14:51:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:04.535 14:51:04 -- 
nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:04.535 14:51:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:04.535 14:51:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:14:04.535 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:14:04.535 14:51:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:04.535 14:51:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:14:04.535 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:14:04.535 14:51:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:04.535 14:51:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.535 14:51:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.535 14:51:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:14:04.535 Found net devices under 0000:09:00.0: mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:04.535 14:51:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.535 14:51:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.535 14:51:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:14:04.535 Found net devices under 0000:09:00.1: mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.535 14:51:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:04.535 14:51:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:04.535 14:51:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:04.535 14:51:04 -- nvmf/common.sh@58 -- # uname 00:14:04.535 14:51:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:04.535 14:51:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:04.535 14:51:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:04.535 14:51:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:04.535 14:51:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:04.535 14:51:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:04.535 14:51:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:04.535 14:51:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:04.535 14:51:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:04.535 14:51:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:04.535 14:51:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:04.535 14:51:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:04.535 14:51:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:04.535 14:51:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@105 -- # continue 2 00:14:04.535 14:51:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@105 -- # continue 2 00:14:04.535 14:51:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:04.535 14:51:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.535 14:51:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:04.535 14:51:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:04.535 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:04.535 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:14:04.535 altname enp9s0f0np0 00:14:04.535 
inet 192.168.100.8/24 scope global mlx_0_0 00:14:04.535 valid_lft forever preferred_lft forever 00:14:04.535 14:51:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:04.535 14:51:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.535 14:51:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:04.535 14:51:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:04.535 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:04.535 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:14:04.535 altname enp9s0f1np1 00:14:04.535 inet 192.168.100.9/24 scope global mlx_0_1 00:14:04.535 valid_lft forever preferred_lft forever 00:14:04.535 14:51:04 -- nvmf/common.sh@411 -- # return 0 00:14:04.535 14:51:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:04.535 14:51:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:04.535 14:51:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:04.535 14:51:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:04.535 14:51:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:04.535 14:51:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:04.535 14:51:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:04.535 14:51:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:04.535 14:51:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@105 -- # continue 2 00:14:04.535 14:51:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:04.535 14:51:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:04.535 14:51:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@105 -- # continue 2 00:14:04.535 14:51:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:04.535 14:51:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.535 14:51:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:04.535 14:51:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:04.535 14:51:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:04.535 14:51:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:04.535 192.168.100.9' 00:14:04.535 14:51:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:04.535 192.168.100.9' 00:14:04.535 14:51:04 -- nvmf/common.sh@446 -- # head -n 1 00:14:04.535 14:51:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:04.535 14:51:04 -- 
nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:04.535 192.168.100.9' 00:14:04.535 14:51:04 -- nvmf/common.sh@447 -- # tail -n +2 00:14:04.535 14:51:04 -- nvmf/common.sh@447 -- # head -n 1 00:14:04.535 14:51:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:04.535 14:51:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:04.535 14:51:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:04.535 14:51:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:04.535 14:51:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:04.535 14:51:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:04.535 14:51:04 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:04.535 14:51:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:04.535 14:51:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:04.535 14:51:04 -- common/autotest_common.sh@10 -- # set +x 00:14:04.535 14:51:04 -- nvmf/common.sh@470 -- # nvmfpid=195793 00:14:04.535 14:51:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:04.535 14:51:04 -- nvmf/common.sh@471 -- # waitforlisten 195793 00:14:04.536 14:51:04 -- common/autotest_common.sh@817 -- # '[' -z 195793 ']' 00:14:04.536 14:51:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.536 14:51:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:04.536 14:51:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.536 14:51:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:04.536 14:51:04 -- common/autotest_common.sh@10 -- # set +x 00:14:04.536 [2024-04-26 14:51:04.492499] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:04.536 [2024-04-26 14:51:04.492621] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.536 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.816 [2024-04-26 14:51:04.624897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.816 [2024-04-26 14:51:04.879263] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.817 [2024-04-26 14:51:04.879327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.817 [2024-04-26 14:51:04.879358] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.817 [2024-04-26 14:51:04.879380] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.817 [2024-04-26 14:51:04.879398] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:04.817 [2024-04-26 14:51:04.879545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.817 [2024-04-26 14:51:04.879612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.817 [2024-04-26 14:51:04.879615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.444 14:51:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.444 14:51:05 -- common/autotest_common.sh@850 -- # return 0 00:14:05.444 14:51:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:05.444 14:51:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:05.444 14:51:05 -- common/autotest_common.sh@10 -- # set +x 00:14:05.444 14:51:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.444 14:51:05 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:05.722 [2024-04-26 14:51:05.665980] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7f46fe898940) succeed. 00:14:05.722 [2024-04-26 14:51:05.676956] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7f46fe852940) succeed. 
00:14:05.994 14:51:05 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.277 14:51:06 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:06.277 14:51:06 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.555 14:51:06 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:06.555 14:51:06 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:06.826 14:51:06 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:07.099 14:51:07 -- target/nvmf_lvol.sh@29 -- # lvs=79186724-a5b5-4dbf-b144-444897532dc2 00:14:07.099 14:51:07 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 79186724-a5b5-4dbf-b144-444897532dc2 lvol 20 00:14:07.375 14:51:07 -- target/nvmf_lvol.sh@32 -- # lvol=5dfb5818-dd24-48d0-9ace-3bc479a8c4b3 00:14:07.375 14:51:07 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:07.652 14:51:07 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5dfb5818-dd24-48d0-9ace-3bc479a8c4b3 00:14:07.926 14:51:07 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:08.205 [2024-04-26 14:51:08.097099] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:08.205 14:51:08 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma 
-a 192.168.100.8 -s 4420 00:14:08.484 14:51:08 -- target/nvmf_lvol.sh@42 -- # perf_pid=196729 00:14:08.484 14:51:08 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:08.484 14:51:08 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:08.484 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.497 14:51:09 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5dfb5818-dd24-48d0-9ace-3bc479a8c4b3 MY_SNAPSHOT 00:14:09.766 14:51:09 -- target/nvmf_lvol.sh@47 -- # snapshot=84b2cbac-f859-4a18-8b4e-5e3f4c16cca2 00:14:09.766 14:51:09 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5dfb5818-dd24-48d0-9ace-3bc479a8c4b3 30 00:14:10.038 14:51:09 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 84b2cbac-f859-4a18-8b4e-5e3f4c16cca2 MY_CLONE 00:14:10.303 14:51:10 -- target/nvmf_lvol.sh@49 -- # clone=fbac220f-e83a-4464-a04a-70a1f273d0da 00:14:10.303 14:51:10 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fbac220f-e83a-4464-a04a-70a1f273d0da 00:14:10.567 14:51:10 -- target/nvmf_lvol.sh@53 -- # wait 196729 00:14:20.566 Initializing NVMe Controllers 00:14:20.566 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.566 Controller IO queue size 128, less than required. 00:14:20.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.566 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:20.566 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:20.566 Initialization complete. 
Launching workers. 00:14:20.566 ======================================================== 00:14:20.566 Latency(us) 00:14:20.566 Device Information : IOPS MiB/s Average min max 00:14:20.566 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11626.60 45.42 11014.10 4610.49 149414.61 00:14:20.566 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11593.00 45.29 11045.82 4589.91 162018.47 00:14:20.566 ======================================================== 00:14:20.566 Total : 23219.60 90.70 11029.94 4589.91 162018.47 00:14:20.566 00:14:20.566 14:51:19 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.566 14:51:20 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5dfb5818-dd24-48d0-9ace-3bc479a8c4b3 00:14:20.566 14:51:20 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79186724-a5b5-4dbf-b144-444897532dc2 00:14:20.827 14:51:20 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:20.827 14:51:20 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:20.827 14:51:20 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:20.827 14:51:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:20.827 14:51:20 -- nvmf/common.sh@117 -- # sync 00:14:20.827 14:51:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:20.827 14:51:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:20.827 14:51:20 -- nvmf/common.sh@120 -- # set +e 00:14:20.827 14:51:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.827 14:51:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:20.827 rmmod nvme_rdma 00:14:20.827 rmmod nvme_fabrics 00:14:20.827 14:51:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.827 14:51:20 -- nvmf/common.sh@124 -- # set -e 00:14:20.827 14:51:20 -- nvmf/common.sh@125 -- # 
return 0 00:14:20.827 14:51:20 -- nvmf/common.sh@478 -- # '[' -n 195793 ']' 00:14:20.827 14:51:20 -- nvmf/common.sh@479 -- # killprocess 195793 00:14:20.827 14:51:20 -- common/autotest_common.sh@936 -- # '[' -z 195793 ']' 00:14:20.827 14:51:20 -- common/autotest_common.sh@940 -- # kill -0 195793 00:14:20.827 14:51:20 -- common/autotest_common.sh@941 -- # uname 00:14:20.827 14:51:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:20.827 14:51:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 195793 00:14:20.827 14:51:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:20.827 14:51:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:20.827 14:51:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 195793' 00:14:20.827 killing process with pid 195793 00:14:20.827 14:51:20 -- common/autotest_common.sh@955 -- # kill 195793 00:14:20.827 14:51:20 -- common/autotest_common.sh@960 -- # wait 195793 00:14:21.394 [2024-04-26 14:51:21.258012] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:22.777 14:51:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:22.777 14:51:22 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:22.777 00:14:22.777 real 0m20.499s 00:14:22.777 user 1m18.826s 00:14:22.777 sys 0m3.004s 00:14:22.777 14:51:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:22.777 14:51:22 -- common/autotest_common.sh@10 -- # set +x 00:14:22.777 ************************************ 00:14:22.777 END TEST nvmf_lvol 00:14:22.777 ************************************ 00:14:22.777 14:51:22 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:22.777 14:51:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:22.777 14:51:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.777 14:51:22 -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.037 ************************************ 00:14:23.037 START TEST nvmf_lvs_grow 00:14:23.037 ************************************ 00:14:23.037 14:51:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:23.037 * Looking for test storage... 00:14:23.037 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:23.037 14:51:22 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.037 14:51:22 -- nvmf/common.sh@7 -- # uname -s 00:14:23.037 14:51:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.037 14:51:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.037 14:51:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.037 14:51:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.037 14:51:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.037 14:51:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.037 14:51:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.037 14:51:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.037 14:51:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.037 14:51:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.037 14:51:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:14:23.037 14:51:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:14:23.037 14:51:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.037 14:51:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.037 14:51:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.037 14:51:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.037 14:51:22 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:23.037 14:51:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.037 14:51:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.037 14:51:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.037 14:51:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.037 14:51:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.037 14:51:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.037 14:51:22 -- paths/export.sh@5 -- # export PATH 00:14:23.037 14:51:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.037 14:51:22 -- nvmf/common.sh@47 -- # : 0 00:14:23.037 14:51:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.037 14:51:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.037 14:51:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.037 14:51:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.037 14:51:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.037 14:51:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.037 14:51:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.037 14:51:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.037 14:51:22 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:23.037 14:51:22 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.037 14:51:22 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:23.037 14:51:22 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:23.037 14:51:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.037 14:51:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:23.037 14:51:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:23.037 14:51:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:23.037 14:51:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.037 14:51:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.037 14:51:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.037 14:51:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:23.037 14:51:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:23.037 14:51:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.037 14:51:22 -- common/autotest_common.sh@10 -- # set +x 00:14:24.951 14:51:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.951 14:51:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.951 14:51:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.951 14:51:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.951 14:51:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.951 14:51:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.951 14:51:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.951 14:51:24 -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.951 14:51:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.951 14:51:24 -- nvmf/common.sh@296 -- # e810=() 00:14:24.951 14:51:24 -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.951 14:51:24 -- nvmf/common.sh@297 -- # x722=() 00:14:24.951 14:51:24 -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.951 14:51:24 -- nvmf/common.sh@298 -- # mlx=() 00:14:24.951 14:51:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.951 14:51:24 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.951 14:51:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.951 14:51:24 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:24.951 14:51:24 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:24.951 14:51:24 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:24.951 14:51:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.951 14:51:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.951 14:51:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:14:24.951 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:14:24.951 14:51:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:14:24.951 14:51:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.951 14:51:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:14:24.951 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:14:24.951 14:51:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:24.951 14:51:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:24.951 14:51:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.951 14:51:24 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.952 14:51:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.952 14:51:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.952 14:51:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:14:24.952 Found net devices under 0000:09:00.0: mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.952 14:51:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.952 14:51:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.952 14:51:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.952 14:51:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:14:24.952 Found net devices under 0000:09:00.1: mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.952 14:51:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:24.952 14:51:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:24.952 14:51:24 -- nvmf/common.sh@405 
-- # [[ yes == yes ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:24.952 14:51:24 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:24.952 14:51:24 -- nvmf/common.sh@58 -- # uname 00:14:24.952 14:51:24 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:24.952 14:51:24 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:24.952 14:51:24 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:24.952 14:51:24 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:24.952 14:51:24 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:24.952 14:51:24 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:24.952 14:51:24 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:24.952 14:51:24 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:24.952 14:51:24 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:24.952 14:51:24 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:24.952 14:51:24 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:24.952 14:51:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.952 14:51:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:24.952 14:51:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:24.952 14:51:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.952 14:51:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:24.952 14:51:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@105 -- # continue 2 00:14:24.952 14:51:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@105 -- # continue 2 00:14:24.952 14:51:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:24.952 14:51:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.952 14:51:24 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:24.952 14:51:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:24.952 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:24.952 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:14:24.952 altname enp9s0f0np0 00:14:24.952 inet 192.168.100.8/24 scope global mlx_0_0 00:14:24.952 valid_lft forever preferred_lft forever 00:14:24.952 14:51:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:24.952 14:51:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.952 14:51:24 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:24.952 14:51:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:24.952 9: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:14:24.952 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:14:24.952 altname enp9s0f1np1 00:14:24.952 inet 192.168.100.9/24 scope global mlx_0_1 00:14:24.952 valid_lft forever preferred_lft forever 00:14:24.952 14:51:24 -- nvmf/common.sh@411 -- # return 0 00:14:24.952 14:51:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:24.952 14:51:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:24.952 14:51:24 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:24.952 14:51:24 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:24.952 14:51:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:24.952 14:51:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:24.952 14:51:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:24.952 14:51:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:24.952 14:51:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:24.952 14:51:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@105 -- # continue 2 00:14:24.952 14:51:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:24.952 14:51:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:24.952 14:51:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@105 -- # continue 2 00:14:24.952 
14:51:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:24.952 14:51:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.952 14:51:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:24.952 14:51:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:24.952 14:51:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:24.952 14:51:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:24.952 192.168.100.9' 00:14:24.952 14:51:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:24.952 192.168.100.9' 00:14:24.952 14:51:24 -- nvmf/common.sh@446 -- # head -n 1 00:14:24.952 14:51:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:24.952 14:51:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:24.952 192.168.100.9' 00:14:24.952 14:51:24 -- nvmf/common.sh@447 -- # tail -n +2 00:14:24.952 14:51:24 -- nvmf/common.sh@447 -- # head -n 1 00:14:24.952 14:51:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:24.952 14:51:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:24.952 14:51:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:24.952 14:51:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:24.952 14:51:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:24.952 14:51:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:24.952 14:51:24 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:24.953 14:51:24 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:14:24.953 14:51:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.953 14:51:24 -- common/autotest_common.sh@10 -- # set +x 00:14:24.953 14:51:24 -- nvmf/common.sh@470 -- # nvmfpid=200154 00:14:24.953 14:51:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:24.953 14:51:24 -- nvmf/common.sh@471 -- # waitforlisten 200154 00:14:24.953 14:51:24 -- common/autotest_common.sh@817 -- # '[' -z 200154 ']' 00:14:24.953 14:51:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.953 14:51:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.953 14:51:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.953 14:51:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.953 14:51:24 -- common/autotest_common.sh@10 -- # set +x 00:14:24.953 [2024-04-26 14:51:24.984645] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:24.953 [2024-04-26 14:51:24.984775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.213 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.213 [2024-04-26 14:51:25.111618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.472 [2024-04-26 14:51:25.356864] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.472 [2024-04-26 14:51:25.356953] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:25.472 [2024-04-26 14:51:25.356979] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.472 [2024-04-26 14:51:25.357003] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.472 [2024-04-26 14:51:25.357022] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.472 [2024-04-26 14:51:25.357077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.040 14:51:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:26.040 14:51:25 -- common/autotest_common.sh@850 -- # return 0 00:14:26.040 14:51:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:26.040 14:51:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:26.040 14:51:25 -- common/autotest_common.sh@10 -- # set +x 00:14:26.040 14:51:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.040 14:51:25 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:26.299 [2024-04-26 14:51:26.263959] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027940/0x7fe12694b940) succeed. 00:14:26.299 [2024-04-26 14:51:26.276233] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027ac0/0x7fe126907940) succeed. 
00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:26.557 14:51:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:26.557 14:51:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.557 14:51:26 -- common/autotest_common.sh@10 -- # set +x 00:14:26.557 ************************************ 00:14:26.557 START TEST lvs_grow_clean 00:14:26.557 ************************************ 00:14:26.557 14:51:26 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.557 14:51:26 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.817 14:51:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:26.817 14:51:26 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:27.076 14:51:27 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:27.076 14:51:27 -- target/nvmf_lvs_grow.sh@29 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:27.076 14:51:27 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:27.334 14:51:27 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:27.334 14:51:27 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:27.334 14:51:27 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fb30ea23-89a9-4624-a928-0d1527dedabb lvol 150 00:14:27.594 14:51:27 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b07dfa20-608d-44d0-80c7-45d0f7a2522b 00:14:27.594 14:51:27 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.594 14:51:27 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:27.853 [2024-04-26 14:51:27.778984] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:27.853 [2024-04-26 14:51:27.779092] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:27.853 true 00:14:27.853 14:51:27 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:27.853 14:51:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:28.114 14:51:28 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:28.114 14:51:28 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.371 14:51:28 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b07dfa20-608d-44d0-80c7-45d0f7a2522b 00:14:28.629 14:51:28 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:28.890 [2024-04-26 14:51:28.754396] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:28.890 14:51:28 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:29.149 14:51:29 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=200732 00:14:29.149 14:51:29 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:29.149 14:51:29 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.149 14:51:29 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 200732 /var/tmp/bdevperf.sock 00:14:29.149 14:51:29 -- common/autotest_common.sh@817 -- # '[' -z 200732 ']' 00:14:29.149 14:51:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.149 14:51:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:29.149 14:51:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:29.149 14:51:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:29.149 14:51:29 -- common/autotest_common.sh@10 -- # set +x 00:14:29.149 [2024-04-26 14:51:29.092144] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:29.149 [2024-04-26 14:51:29.092277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200732 ] 00:14:29.149 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.149 [2024-04-26 14:51:29.220900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.409 [2024-04-26 14:51:29.467172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.978 14:51:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:29.978 14:51:30 -- common/autotest_common.sh@850 -- # return 0 00:14:29.978 14:51:30 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:30.544 Nvme0n1 00:14:30.544 14:51:30 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:30.544 [ 00:14:30.544 { 00:14:30.544 "name": "Nvme0n1", 00:14:30.544 "aliases": [ 00:14:30.544 "b07dfa20-608d-44d0-80c7-45d0f7a2522b" 00:14:30.544 ], 00:14:30.544 "product_name": "NVMe disk", 00:14:30.544 "block_size": 4096, 00:14:30.544 "num_blocks": 38912, 00:14:30.544 "uuid": "b07dfa20-608d-44d0-80c7-45d0f7a2522b", 00:14:30.544 "assigned_rate_limits": { 00:14:30.544 "rw_ios_per_sec": 0, 00:14:30.544 "rw_mbytes_per_sec": 0, 00:14:30.544 "r_mbytes_per_sec": 0, 00:14:30.544 "w_mbytes_per_sec": 0 00:14:30.544 }, 00:14:30.544 "claimed": false, 00:14:30.544 "zoned": false, 00:14:30.544 "supported_io_types": { 00:14:30.544 "read": true, 00:14:30.544 "write": true, 00:14:30.544 "unmap": true, 00:14:30.544 "write_zeroes": true, 00:14:30.544 "flush": true, 00:14:30.544 "reset": true, 00:14:30.544 "compare": true, 00:14:30.544 "compare_and_write": 
true, 00:14:30.545 "abort": true, 00:14:30.545 "nvme_admin": true, 00:14:30.545 "nvme_io": true 00:14:30.545 }, 00:14:30.545 "memory_domains": [ 00:14:30.545 { 00:14:30.545 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:30.545 "dma_device_type": 0 00:14:30.545 } 00:14:30.545 ], 00:14:30.545 "driver_specific": { 00:14:30.545 "nvme": [ 00:14:30.545 { 00:14:30.545 "trid": { 00:14:30.545 "trtype": "RDMA", 00:14:30.545 "adrfam": "IPv4", 00:14:30.545 "traddr": "192.168.100.8", 00:14:30.545 "trsvcid": "4420", 00:14:30.545 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:30.545 }, 00:14:30.545 "ctrlr_data": { 00:14:30.545 "cntlid": 1, 00:14:30.545 "vendor_id": "0x8086", 00:14:30.545 "model_number": "SPDK bdev Controller", 00:14:30.545 "serial_number": "SPDK0", 00:14:30.545 "firmware_revision": "24.05", 00:14:30.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:30.545 "oacs": { 00:14:30.545 "security": 0, 00:14:30.545 "format": 0, 00:14:30.545 "firmware": 0, 00:14:30.545 "ns_manage": 0 00:14:30.545 }, 00:14:30.545 "multi_ctrlr": true, 00:14:30.545 "ana_reporting": false 00:14:30.545 }, 00:14:30.545 "vs": { 00:14:30.545 "nvme_version": "1.3" 00:14:30.545 }, 00:14:30.545 "ns_data": { 00:14:30.545 "id": 1, 00:14:30.545 "can_share": true 00:14:30.545 } 00:14:30.545 } 00:14:30.545 ], 00:14:30.545 "mp_policy": "active_passive" 00:14:30.545 } 00:14:30.545 } 00:14:30.545 ] 00:14:30.805 14:51:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=200876 00:14:30.805 14:51:30 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:30.805 14:51:30 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:30.805 Running I/O for 10 seconds... 
00:14:31.748 Latency(us) 00:14:31.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.748 Nvme0n1 : 1.00 16867.00 65.89 0.00 0.00 0.00 0.00 0.00 00:14:31.748 =================================================================================================================== 00:14:31.748 Total : 16867.00 65.89 0.00 0.00 0.00 0.00 0.00 00:14:31.748 00:14:32.685 14:51:32 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:32.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.685 Nvme0n1 : 2.00 17057.50 66.63 0.00 0.00 0.00 0.00 0.00 00:14:32.685 =================================================================================================================== 00:14:32.685 Total : 17057.50 66.63 0.00 0.00 0.00 0.00 0.00 00:14:32.685 00:14:32.944 true 00:14:32.944 14:51:32 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:32.944 14:51:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:33.202 14:51:33 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:33.202 14:51:33 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:33.202 14:51:33 -- target/nvmf_lvs_grow.sh@65 -- # wait 200876 00:14:33.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.771 Nvme0n1 : 3.00 17259.67 67.42 0.00 0.00 0.00 0.00 0.00 00:14:33.771 =================================================================================================================== 00:14:33.772 Total : 17259.67 67.42 0.00 0.00 0.00 0.00 0.00 00:14:33.772 00:14:34.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.709 Nvme0n1 
: 4.00 17409.25 68.00 0.00 0.00 0.00 0.00 0.00 00:14:34.709 =================================================================================================================== 00:14:34.709 Total : 17409.25 68.00 0.00 0.00 0.00 0.00 0.00 00:14:34.709 00:14:36.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.098 Nvme0n1 : 5.00 17394.20 67.95 0.00 0.00 0.00 0.00 0.00 00:14:36.098 =================================================================================================================== 00:14:36.098 Total : 17394.20 67.95 0.00 0.00 0.00 0.00 0.00 00:14:36.098 00:14:37.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.038 Nvme0n1 : 6.00 17535.33 68.50 0.00 0.00 0.00 0.00 0.00 00:14:37.038 =================================================================================================================== 00:14:37.038 Total : 17535.33 68.50 0.00 0.00 0.00 0.00 0.00 00:14:37.038 00:14:37.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.978 Nvme0n1 : 7.00 17604.29 68.77 0.00 0.00 0.00 0.00 0.00 00:14:37.978 =================================================================================================================== 00:14:37.978 Total : 17604.29 68.77 0.00 0.00 0.00 0.00 0.00 00:14:37.978 00:14:38.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.922 Nvme0n1 : 8.00 17656.38 68.97 0.00 0.00 0.00 0.00 0.00 00:14:38.922 =================================================================================================================== 00:14:38.922 Total : 17656.38 68.97 0.00 0.00 0.00 0.00 0.00 00:14:38.922 00:14:39.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.864 Nvme0n1 : 9.00 17731.11 69.26 0.00 0.00 0.00 0.00 0.00 00:14:39.864 =================================================================================================================== 00:14:39.864 Total 
: 17731.11 69.26 0.00 0.00 0.00 0.00 0.00 00:14:39.864 00:14:40.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.804 Nvme0n1 : 10.00 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:14:40.804 =================================================================================================================== 00:14:40.804 Total : 17754.00 69.35 0.00 0.00 0.00 0.00 0.00 00:14:40.804 00:14:40.804 00:14:40.804 Latency(us) 00:14:40.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.804 Nvme0n1 : 10.01 17755.41 69.36 0.00 0.00 7202.58 5315.70 15922.82 00:14:40.804 =================================================================================================================== 00:14:40.804 Total : 17755.41 69.36 0.00 0.00 7202.58 5315.70 15922.82 00:14:40.804 0 00:14:40.804 14:51:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 200732 00:14:40.804 14:51:40 -- common/autotest_common.sh@936 -- # '[' -z 200732 ']' 00:14:40.805 14:51:40 -- common/autotest_common.sh@940 -- # kill -0 200732 00:14:40.805 14:51:40 -- common/autotest_common.sh@941 -- # uname 00:14:40.805 14:51:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.805 14:51:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 200732 00:14:40.805 14:51:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:40.805 14:51:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:40.805 14:51:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 200732' 00:14:40.805 killing process with pid 200732 00:14:40.805 14:51:40 -- common/autotest_common.sh@955 -- # kill 200732 00:14:40.805 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.805 00:14:40.805 Latency(us) 00:14:40.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.805 
=================================================================================================================== 00:14:40.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.805 14:51:40 -- common/autotest_common.sh@960 -- # wait 200732 00:14:42.190 14:51:41 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.190 14:51:42 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:42.190 14:51:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.450 14:51:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.450 14:51:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:42.450 14:51:42 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.711 [2024-04-26 14:51:42.650493] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:42.711 14:51:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:42.711 14:51:42 -- common/autotest_common.sh@638 -- # local es=0 00:14:42.711 14:51:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:42.711 14:51:42 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:42.711 14:51:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:42.711 14:51:42 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:42.711 14:51:42 -- common/autotest_common.sh@630 -- # case "$(type -t 
"$arg")" in 00:14:42.711 14:51:42 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:42.711 14:51:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:42.711 14:51:42 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:42.711 14:51:42 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.711 14:51:42 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:42.972 request: 00:14:42.972 { 00:14:42.972 "uuid": "fb30ea23-89a9-4624-a928-0d1527dedabb", 00:14:42.972 "method": "bdev_lvol_get_lvstores", 00:14:42.972 "req_id": 1 00:14:42.972 } 00:14:42.972 Got JSON-RPC error response 00:14:42.972 response: 00:14:42.972 { 00:14:42.972 "code": -19, 00:14:42.972 "message": "No such device" 00:14:42.972 } 00:14:42.972 14:51:42 -- common/autotest_common.sh@641 -- # es=1 00:14:42.972 14:51:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:42.972 14:51:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:42.972 14:51:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:42.972 14:51:42 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.231 aio_bdev 00:14:43.231 14:51:43 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b07dfa20-608d-44d0-80c7-45d0f7a2522b 00:14:43.231 14:51:43 -- common/autotest_common.sh@885 -- # local bdev_name=b07dfa20-608d-44d0-80c7-45d0f7a2522b 00:14:43.231 14:51:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:43.231 14:51:43 -- common/autotest_common.sh@887 -- # local i 00:14:43.231 14:51:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:43.231 
14:51:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:43.231 14:51:43 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.489 14:51:43 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b07dfa20-608d-44d0-80c7-45d0f7a2522b -t 2000 00:14:43.747 [ 00:14:43.747 { 00:14:43.747 "name": "b07dfa20-608d-44d0-80c7-45d0f7a2522b", 00:14:43.747 "aliases": [ 00:14:43.747 "lvs/lvol" 00:14:43.747 ], 00:14:43.747 "product_name": "Logical Volume", 00:14:43.747 "block_size": 4096, 00:14:43.747 "num_blocks": 38912, 00:14:43.747 "uuid": "b07dfa20-608d-44d0-80c7-45d0f7a2522b", 00:14:43.747 "assigned_rate_limits": { 00:14:43.747 "rw_ios_per_sec": 0, 00:14:43.747 "rw_mbytes_per_sec": 0, 00:14:43.747 "r_mbytes_per_sec": 0, 00:14:43.747 "w_mbytes_per_sec": 0 00:14:43.747 }, 00:14:43.747 "claimed": false, 00:14:43.747 "zoned": false, 00:14:43.747 "supported_io_types": { 00:14:43.747 "read": true, 00:14:43.747 "write": true, 00:14:43.747 "unmap": true, 00:14:43.747 "write_zeroes": true, 00:14:43.747 "flush": false, 00:14:43.747 "reset": true, 00:14:43.747 "compare": false, 00:14:43.747 "compare_and_write": false, 00:14:43.747 "abort": false, 00:14:43.747 "nvme_admin": false, 00:14:43.747 "nvme_io": false 00:14:43.747 }, 00:14:43.747 "driver_specific": { 00:14:43.747 "lvol": { 00:14:43.747 "lvol_store_uuid": "fb30ea23-89a9-4624-a928-0d1527dedabb", 00:14:43.747 "base_bdev": "aio_bdev", 00:14:43.747 "thin_provision": false, 00:14:43.747 "snapshot": false, 00:14:43.747 "clone": false, 00:14:43.747 "esnap_clone": false 00:14:43.747 } 00:14:43.747 } 00:14:43.747 } 00:14:43.747 ] 00:14:43.747 14:51:43 -- common/autotest_common.sh@893 -- # return 0 00:14:43.747 14:51:43 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:43.747 
14:51:43 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.006 14:51:43 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.006 14:51:43 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:44.006 14:51:43 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.267 14:51:44 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.267 14:51:44 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b07dfa20-608d-44d0-80c7-45d0f7a2522b 00:14:44.528 14:51:44 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb30ea23-89a9-4624-a928-0d1527dedabb 00:14:44.787 14:51:44 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.046 14:51:44 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.046 00:14:45.046 real 0m18.498s 00:14:45.046 user 0m18.661s 00:14:45.046 sys 0m1.379s 00:14:45.046 14:51:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.046 14:51:44 -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 ************************************ 00:14:45.046 END TEST lvs_grow_clean 00:14:45.046 ************************************ 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:45.046 14:51:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.046 14:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.046 14:51:45 -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 ************************************ 00:14:45.046 START TEST lvs_grow_dirty 00:14:45.046 ************************************ 00:14:45.046 14:51:45 -- 
common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.046 14:51:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.306 14:51:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.306 14:51:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:45.306 14:51:45 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:45.873 14:51:45 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 lvol 150 00:14:46.133 14:51:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8f08adda-48bc-4d23-a420-66fba793af1b 00:14:46.133 14:51:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.133 14:51:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:46.392 [2024-04-26 14:51:46.387018] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:46.392 [2024-04-26 14:51:46.387157] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:46.392 true 00:14:46.392 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:14:46.392 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:46.651 14:51:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:46.651 14:51:46 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:46.911 14:51:46 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f08adda-48bc-4d23-a420-66fba793af1b 00:14:47.173 14:51:47 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:47.432 14:51:47 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:47.691 14:51:47 -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=202919 00:14:47.691 14:51:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:47.691 14:51:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:47.691 14:51:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 202919 /var/tmp/bdevperf.sock 00:14:47.691 14:51:47 -- common/autotest_common.sh@817 -- # '[' -z 202919 ']' 00:14:47.691 14:51:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.691 14:51:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:47.691 14:51:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.691 14:51:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:47.691 14:51:47 -- common/autotest_common.sh@10 -- # set +x 00:14:47.691 [2024-04-26 14:51:47.746342] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:47.691 [2024-04-26 14:51:47.746497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202919 ] 00:14:47.951 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.951 [2024-04-26 14:51:47.882894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.211 [2024-04-26 14:51:48.120423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.777 14:51:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.777 14:51:48 -- common/autotest_common.sh@850 -- # return 0 00:14:48.777 14:51:48 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:49.035 Nvme0n1 00:14:49.035 14:51:49 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:49.294 [ 00:14:49.294 { 00:14:49.294 "name": "Nvme0n1", 00:14:49.294 "aliases": [ 00:14:49.294 "8f08adda-48bc-4d23-a420-66fba793af1b" 00:14:49.294 ], 00:14:49.294 "product_name": "NVMe disk", 00:14:49.294 "block_size": 4096, 00:14:49.294 "num_blocks": 38912, 00:14:49.294 "uuid": "8f08adda-48bc-4d23-a420-66fba793af1b", 00:14:49.294 "assigned_rate_limits": { 00:14:49.294 "rw_ios_per_sec": 0, 00:14:49.294 "rw_mbytes_per_sec": 0, 00:14:49.294 "r_mbytes_per_sec": 0, 00:14:49.294 "w_mbytes_per_sec": 0 00:14:49.294 }, 00:14:49.294 "claimed": false, 00:14:49.294 "zoned": false, 00:14:49.294 "supported_io_types": { 00:14:49.294 "read": true, 00:14:49.294 "write": true, 00:14:49.294 "unmap": true, 00:14:49.294 "write_zeroes": true, 00:14:49.294 "flush": true, 00:14:49.294 "reset": true, 00:14:49.294 "compare": true, 00:14:49.294 "compare_and_write": 
true, 00:14:49.294 "abort": true, 00:14:49.294 "nvme_admin": true, 00:14:49.294 "nvme_io": true 00:14:49.294 }, 00:14:49.294 "memory_domains": [ 00:14:49.294 { 00:14:49.294 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:49.294 "dma_device_type": 0 00:14:49.294 } 00:14:49.294 ], 00:14:49.294 "driver_specific": { 00:14:49.294 "nvme": [ 00:14:49.294 { 00:14:49.294 "trid": { 00:14:49.294 "trtype": "RDMA", 00:14:49.294 "adrfam": "IPv4", 00:14:49.294 "traddr": "192.168.100.8", 00:14:49.294 "trsvcid": "4420", 00:14:49.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:49.294 }, 00:14:49.294 "ctrlr_data": { 00:14:49.294 "cntlid": 1, 00:14:49.294 "vendor_id": "0x8086", 00:14:49.294 "model_number": "SPDK bdev Controller", 00:14:49.294 "serial_number": "SPDK0", 00:14:49.294 "firmware_revision": "24.05", 00:14:49.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.294 "oacs": { 00:14:49.294 "security": 0, 00:14:49.294 "format": 0, 00:14:49.294 "firmware": 0, 00:14:49.294 "ns_manage": 0 00:14:49.294 }, 00:14:49.294 "multi_ctrlr": true, 00:14:49.294 "ana_reporting": false 00:14:49.294 }, 00:14:49.294 "vs": { 00:14:49.294 "nvme_version": "1.3" 00:14:49.294 }, 00:14:49.295 "ns_data": { 00:14:49.295 "id": 1, 00:14:49.295 "can_share": true 00:14:49.295 } 00:14:49.295 } 00:14:49.295 ], 00:14:49.295 "mp_policy": "active_passive" 00:14:49.295 } 00:14:49.295 } 00:14:49.295 ] 00:14:49.295 14:51:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=203178 00:14:49.295 14:51:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:49.295 14:51:49 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.555 Running I/O for 10 seconds... 
00:14:50.497 Latency(us) 00:14:50.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.497 Nvme0n1 : 1.00 16777.00 65.54 0.00 0.00 0.00 0.00 0.00 00:14:50.497 =================================================================================================================== 00:14:50.497 Total : 16777.00 65.54 0.00 0.00 0.00 0.00 0.00 00:14:50.497 00:14:51.435 14:51:51 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:14:51.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.435 Nvme0n1 : 2.00 17107.50 66.83 0.00 0.00 0.00 0.00 0.00 00:14:51.435 =================================================================================================================== 00:14:51.435 Total : 17107.50 66.83 0.00 0.00 0.00 0.00 0.00 00:14:51.435 00:14:51.693 true 00:14:51.693 14:51:51 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:14:51.693 14:51:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:51.953 14:51:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:51.954 14:51:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:51.954 14:51:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 203178 00:14:52.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.524 Nvme0n1 : 3.00 17069.33 66.68 0.00 0.00 0.00 0.00 0.00 00:14:52.524 =================================================================================================================== 00:14:52.524 Total : 17069.33 66.68 0.00 0.00 0.00 0.00 0.00 00:14:52.524 00:14:53.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.462 Nvme0n1 
: 4.00 17287.00 67.53 0.00 0.00 0.00 0.00 0.00 00:14:53.462 =================================================================================================================== 00:14:53.462 Total : 17287.00 67.53 0.00 0.00 0.00 0.00 0.00 00:14:53.462 00:14:54.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.398 Nvme0n1 : 5.00 17256.20 67.41 0.00 0.00 0.00 0.00 0.00 00:14:54.398 =================================================================================================================== 00:14:54.398 Total : 17256.20 67.41 0.00 0.00 0.00 0.00 0.00 00:14:54.398 00:14:55.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.778 Nvme0n1 : 6.00 17273.00 67.47 0.00 0.00 0.00 0.00 0.00 00:14:55.778 =================================================================================================================== 00:14:55.778 Total : 17273.00 67.47 0.00 0.00 0.00 0.00 0.00 00:14:55.778 00:14:56.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.714 Nvme0n1 : 7.00 17382.14 67.90 0.00 0.00 0.00 0.00 0.00 00:14:56.714 =================================================================================================================== 00:14:56.714 Total : 17382.14 67.90 0.00 0.00 0.00 0.00 0.00 00:14:56.714 00:14:57.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.654 Nvme0n1 : 8.00 17425.12 68.07 0.00 0.00 0.00 0.00 0.00 00:14:57.654 =================================================================================================================== 00:14:57.654 Total : 17425.12 68.07 0.00 0.00 0.00 0.00 0.00 00:14:57.654 00:14:58.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.590 Nvme0n1 : 9.00 17451.89 68.17 0.00 0.00 0.00 0.00 0.00 00:14:58.590 =================================================================================================================== 00:14:58.590 Total 
: 17451.89 68.17 0.00 0.00 0.00 0.00 0.00 00:14:58.590 00:14:59.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.524 Nvme0n1 : 10.00 17456.90 68.19 0.00 0.00 0.00 0.00 0.00 00:14:59.524 =================================================================================================================== 00:14:59.524 Total : 17456.90 68.19 0.00 0.00 0.00 0.00 0.00 00:14:59.524 00:14:59.524 00:14:59.524 Latency(us) 00:14:59.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.524 Nvme0n1 : 10.01 17457.34 68.19 0.00 0.00 7325.15 5485.61 22719.15 00:14:59.524 =================================================================================================================== 00:14:59.524 Total : 17457.34 68.19 0.00 0.00 7325.15 5485.61 22719.15 00:14:59.524 0 00:14:59.524 14:51:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 202919 00:14:59.524 14:51:59 -- common/autotest_common.sh@936 -- # '[' -z 202919 ']' 00:14:59.525 14:51:59 -- common/autotest_common.sh@940 -- # kill -0 202919 00:14:59.525 14:51:59 -- common/autotest_common.sh@941 -- # uname 00:14:59.525 14:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.525 14:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 202919 00:14:59.525 14:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.525 14:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:59.525 14:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 202919' 00:14:59.525 killing process with pid 202919 00:14:59.525 14:51:59 -- common/autotest_common.sh@955 -- # kill 202919 00:14:59.525 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.525 00:14:59.525 Latency(us) 00:14:59.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.525 
=================================================================================================================== 00:14:59.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.525 14:51:59 -- common/autotest_common.sh@960 -- # wait 202919 00:15:00.903 14:52:00 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:00.903 14:52:00 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:00.903 14:52:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 200154 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@74 -- # wait 200154 00:15:01.163 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 200154 Killed "${NVMF_APP[@]}" "$@" 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:01.163 14:52:01 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:01.163 14:52:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:01.163 14:52:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:01.163 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 14:52:01 -- nvmf/common.sh@470 -- # nvmfpid=204520 00:15:01.163 14:52:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.163 14:52:01 -- nvmf/common.sh@471 -- # waitforlisten 204520 00:15:01.163 14:52:01 -- common/autotest_common.sh@817 -- # '[' -z 204520 ']' 00:15:01.163 14:52:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.163 14:52:01 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:15:01.163 14:52:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.163 14:52:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.163 14:52:01 -- common/autotest_common.sh@10 -- # set +x 00:15:01.422 [2024-04-26 14:52:01.253566] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:01.422 [2024-04-26 14:52:01.253691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.422 [2024-04-26 14:52:01.386943] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.682 [2024-04-26 14:52:01.634877] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.682 [2024-04-26 14:52:01.634953] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.682 [2024-04-26 14:52:01.634978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.682 [2024-04-26 14:52:01.635001] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.682 [2024-04-26 14:52:01.635020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.682 [2024-04-26 14:52:01.635073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.249 14:52:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.249 14:52:02 -- common/autotest_common.sh@850 -- # return 0 00:15:02.249 14:52:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:02.249 14:52:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:02.249 14:52:02 -- common/autotest_common.sh@10 -- # set +x 00:15:02.249 14:52:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.249 14:52:02 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.507 [2024-04-26 14:52:02.431151] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:02.507 [2024-04-26 14:52:02.431400] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:02.507 [2024-04-26 14:52:02.431491] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:02.507 14:52:02 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:02.507 14:52:02 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 8f08adda-48bc-4d23-a420-66fba793af1b 00:15:02.507 14:52:02 -- common/autotest_common.sh@885 -- # local bdev_name=8f08adda-48bc-4d23-a420-66fba793af1b 00:15:02.507 14:52:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:02.507 14:52:02 -- common/autotest_common.sh@887 -- # local i 00:15:02.507 14:52:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:02.507 14:52:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:02.507 14:52:02 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:02.764 14:52:02 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8f08adda-48bc-4d23-a420-66fba793af1b -t 2000 00:15:03.024 [ 00:15:03.024 { 00:15:03.024 "name": "8f08adda-48bc-4d23-a420-66fba793af1b", 00:15:03.024 "aliases": [ 00:15:03.024 "lvs/lvol" 00:15:03.024 ], 00:15:03.025 "product_name": "Logical Volume", 00:15:03.025 "block_size": 4096, 00:15:03.025 "num_blocks": 38912, 00:15:03.025 "uuid": "8f08adda-48bc-4d23-a420-66fba793af1b", 00:15:03.025 "assigned_rate_limits": { 00:15:03.025 "rw_ios_per_sec": 0, 00:15:03.025 "rw_mbytes_per_sec": 0, 00:15:03.025 "r_mbytes_per_sec": 0, 00:15:03.025 "w_mbytes_per_sec": 0 00:15:03.025 }, 00:15:03.025 "claimed": false, 00:15:03.025 "zoned": false, 00:15:03.025 "supported_io_types": { 00:15:03.025 "read": true, 00:15:03.025 "write": true, 00:15:03.025 "unmap": true, 00:15:03.025 "write_zeroes": true, 00:15:03.025 "flush": false, 00:15:03.025 "reset": true, 00:15:03.025 "compare": false, 00:15:03.025 "compare_and_write": false, 00:15:03.025 "abort": false, 00:15:03.025 "nvme_admin": false, 00:15:03.025 "nvme_io": false 00:15:03.025 }, 00:15:03.025 "driver_specific": { 00:15:03.025 "lvol": { 00:15:03.025 "lvol_store_uuid": "d4e9d707-4a2a-41eb-94a0-3ded4aafdc38", 00:15:03.025 "base_bdev": "aio_bdev", 00:15:03.025 "thin_provision": false, 00:15:03.025 "snapshot": false, 00:15:03.025 "clone": false, 00:15:03.025 "esnap_clone": false 00:15:03.025 } 00:15:03.025 } 00:15:03.025 } 00:15:03.025 ] 00:15:03.025 14:52:02 -- common/autotest_common.sh@893 -- # return 0 00:15:03.025 14:52:02 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:03.025 14:52:02 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:03.284 14:52:03 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:03.284 14:52:03 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:03.284 14:52:03 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:03.542 14:52:03 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:03.542 14:52:03 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.802 [2024-04-26 14:52:03.739941] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:03.802 14:52:03 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:03.802 14:52:03 -- common/autotest_common.sh@638 -- # local es=0 00:15:03.802 14:52:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:03.802 14:52:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:03.802 14:52:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:03.802 14:52:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:03.802 14:52:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:03.802 14:52:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:03.802 14:52:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:03.802 14:52:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:03.802 14:52:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:15:03.802 14:52:03 -- 
common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:04.063 request: 00:15:04.063 { 00:15:04.063 "uuid": "d4e9d707-4a2a-41eb-94a0-3ded4aafdc38", 00:15:04.063 "method": "bdev_lvol_get_lvstores", 00:15:04.063 "req_id": 1 00:15:04.063 } 00:15:04.063 Got JSON-RPC error response 00:15:04.063 response: 00:15:04.063 { 00:15:04.063 "code": -19, 00:15:04.063 "message": "No such device" 00:15:04.063 } 00:15:04.063 14:52:04 -- common/autotest_common.sh@641 -- # es=1 00:15:04.063 14:52:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:04.063 14:52:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:04.063 14:52:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:04.063 14:52:04 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:04.322 aio_bdev 00:15:04.322 14:52:04 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8f08adda-48bc-4d23-a420-66fba793af1b 00:15:04.322 14:52:04 -- common/autotest_common.sh@885 -- # local bdev_name=8f08adda-48bc-4d23-a420-66fba793af1b 00:15:04.322 14:52:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:04.322 14:52:04 -- common/autotest_common.sh@887 -- # local i 00:15:04.322 14:52:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:04.322 14:52:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:04.322 14:52:04 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.582 14:52:04 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8f08adda-48bc-4d23-a420-66fba793af1b -t 2000 00:15:04.841 [ 00:15:04.841 { 00:15:04.841 "name": "8f08adda-48bc-4d23-a420-66fba793af1b", 00:15:04.841 "aliases": [ 
00:15:04.841 "lvs/lvol" 00:15:04.841 ], 00:15:04.842 "product_name": "Logical Volume", 00:15:04.842 "block_size": 4096, 00:15:04.842 "num_blocks": 38912, 00:15:04.842 "uuid": "8f08adda-48bc-4d23-a420-66fba793af1b", 00:15:04.842 "assigned_rate_limits": { 00:15:04.842 "rw_ios_per_sec": 0, 00:15:04.842 "rw_mbytes_per_sec": 0, 00:15:04.842 "r_mbytes_per_sec": 0, 00:15:04.842 "w_mbytes_per_sec": 0 00:15:04.842 }, 00:15:04.842 "claimed": false, 00:15:04.842 "zoned": false, 00:15:04.842 "supported_io_types": { 00:15:04.842 "read": true, 00:15:04.842 "write": true, 00:15:04.842 "unmap": true, 00:15:04.842 "write_zeroes": true, 00:15:04.842 "flush": false, 00:15:04.842 "reset": true, 00:15:04.842 "compare": false, 00:15:04.842 "compare_and_write": false, 00:15:04.842 "abort": false, 00:15:04.842 "nvme_admin": false, 00:15:04.842 "nvme_io": false 00:15:04.842 }, 00:15:04.842 "driver_specific": { 00:15:04.842 "lvol": { 00:15:04.842 "lvol_store_uuid": "d4e9d707-4a2a-41eb-94a0-3ded4aafdc38", 00:15:04.842 "base_bdev": "aio_bdev", 00:15:04.842 "thin_provision": false, 00:15:04.842 "snapshot": false, 00:15:04.842 "clone": false, 00:15:04.842 "esnap_clone": false 00:15:04.842 } 00:15:04.842 } 00:15:04.842 } 00:15:04.842 ] 00:15:04.842 14:52:04 -- common/autotest_common.sh@893 -- # return 0 00:15:04.842 14:52:04 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:04.842 14:52:04 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:05.099 14:52:05 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:05.099 14:52:05 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:05.099 14:52:05 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:05.356 14:52:05 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 
00:15:05.356 14:52:05 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f08adda-48bc-4d23-a420-66fba793af1b 00:15:05.616 14:52:05 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4e9d707-4a2a-41eb-94a0-3ded4aafdc38 00:15:05.875 14:52:05 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:06.136 14:52:06 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:06.136 00:15:06.136 real 0m20.928s 00:15:06.136 user 0m53.801s 00:15:06.136 sys 0m4.053s 00:15:06.136 14:52:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.136 14:52:06 -- common/autotest_common.sh@10 -- # set +x 00:15:06.136 ************************************ 00:15:06.136 END TEST lvs_grow_dirty 00:15:06.136 ************************************ 00:15:06.136 14:52:06 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:06.136 14:52:06 -- common/autotest_common.sh@794 -- # type=--id 00:15:06.136 14:52:06 -- common/autotest_common.sh@795 -- # id=0 00:15:06.136 14:52:06 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:06.136 14:52:06 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.136 14:52:06 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:06.136 14:52:06 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:06.136 14:52:06 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:06.136 14:52:06 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.136 nvmf_trace.0 00:15:06.136 14:52:06 -- common/autotest_common.sh@809 -- # return 0 00:15:06.136 14:52:06 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:06.136 
14:52:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:06.136 14:52:06 -- nvmf/common.sh@117 -- # sync 00:15:06.136 14:52:06 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:06.136 14:52:06 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:06.136 14:52:06 -- nvmf/common.sh@120 -- # set +e 00:15:06.136 14:52:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.136 14:52:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:06.136 rmmod nvme_rdma 00:15:06.136 rmmod nvme_fabrics 00:15:06.136 14:52:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.136 14:52:06 -- nvmf/common.sh@124 -- # set -e 00:15:06.136 14:52:06 -- nvmf/common.sh@125 -- # return 0 00:15:06.136 14:52:06 -- nvmf/common.sh@478 -- # '[' -n 204520 ']' 00:15:06.136 14:52:06 -- nvmf/common.sh@479 -- # killprocess 204520 00:15:06.136 14:52:06 -- common/autotest_common.sh@936 -- # '[' -z 204520 ']' 00:15:06.136 14:52:06 -- common/autotest_common.sh@940 -- # kill -0 204520 00:15:06.136 14:52:06 -- common/autotest_common.sh@941 -- # uname 00:15:06.136 14:52:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.136 14:52:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 204520 00:15:06.136 14:52:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:06.136 14:52:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:06.136 14:52:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 204520' 00:15:06.136 killing process with pid 204520 00:15:06.136 14:52:06 -- common/autotest_common.sh@955 -- # kill 204520 00:15:06.136 14:52:06 -- common/autotest_common.sh@960 -- # wait 204520 00:15:07.517 14:52:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:07.517 14:52:07 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:07.517 00:15:07.517 real 0m44.502s 00:15:07.517 user 1m19.838s 00:15:07.517 sys 0m7.458s 00:15:07.517 14:52:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.517 
14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:15:07.517 ************************************ 00:15:07.517 END TEST nvmf_lvs_grow 00:15:07.517 ************************************ 00:15:07.517 14:52:07 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:15:07.517 14:52:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.517 14:52:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.517 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:15:07.517 ************************************ 00:15:07.517 START TEST nvmf_bdev_io_wait 00:15:07.517 ************************************ 00:15:07.517 14:52:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:15:07.517 * Looking for test storage... 00:15:07.517 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:07.517 14:52:07 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.517 14:52:07 -- nvmf/common.sh@7 -- # uname -s 00:15:07.517 14:52:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.517 14:52:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.517 14:52:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.517 14:52:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.517 14:52:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.517 14:52:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.517 14:52:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.517 14:52:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.517 14:52:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.517 14:52:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.517 14:52:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:07.517 14:52:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:07.517 14:52:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.517 14:52:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.517 14:52:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.517 14:52:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.517 14:52:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:07.517 14:52:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.517 14:52:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.517 14:52:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.517 14:52:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.517 14:52:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.517 14:52:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.517 14:52:07 -- paths/export.sh@5 -- # export PATH 00:15:07.517 14:52:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.517 14:52:07 -- nvmf/common.sh@47 -- # : 0 00:15:07.517 14:52:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.517 14:52:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.517 14:52:07 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:15:07.517 14:52:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.517 14:52:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.517 14:52:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.517 14:52:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.517 14:52:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.517 14:52:07 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.517 14:52:07 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.517 14:52:07 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.517 14:52:07 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:07.517 14:52:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.517 14:52:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:07.517 14:52:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:07.517 14:52:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:07.517 14:52:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.517 14:52:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.517 14:52:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.517 14:52:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:07.517 14:52:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:07.517 14:52:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.517 14:52:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.072 14:52:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:10.072 14:52:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.072 14:52:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.072 14:52:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.072 14:52:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.072 14:52:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.072 14:52:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.072 14:52:09 -- 
nvmf/common.sh@295 -- # net_devs=() 00:15:10.072 14:52:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.072 14:52:09 -- nvmf/common.sh@296 -- # e810=() 00:15:10.072 14:52:09 -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.072 14:52:09 -- nvmf/common.sh@297 -- # x722=() 00:15:10.072 14:52:09 -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.072 14:52:09 -- nvmf/common.sh@298 -- # mlx=() 00:15:10.072 14:52:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.072 14:52:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.072 14:52:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.072 14:52:09 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:10.072 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:10.072 14:52:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:10.072 14:52:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:10.072 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:15:10.072 14:52:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:10.072 14:52:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.072 14:52:09 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.072 14:52:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:10.072 14:52:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.072 14:52:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:10.072 Found net devices under 0000:09:00.0: mlx_0_0 00:15:10.072 14:52:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.072 14:52:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:10.072 14:52:09 
-- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.072 14:52:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:10.072 Found net devices under 0000:09:00.1: mlx_0_1 00:15:10.072 14:52:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.072 14:52:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:10.072 14:52:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:10.072 14:52:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:10.072 14:52:09 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:10.072 14:52:09 -- nvmf/common.sh@58 -- # uname 00:15:10.072 14:52:09 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:10.072 14:52:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:10.072 14:52:09 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:10.072 14:52:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:10.072 14:52:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:10.072 14:52:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:10.072 14:52:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:10.072 14:52:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:10.072 14:52:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:10.072 14:52:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:10.072 14:52:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:10.072 14:52:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:10.072 14:52:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:10.072 14:52:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:10.072 14:52:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:10.072 14:52:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:10.072 14:52:09 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:10.072 14:52:09 -- nvmf/common.sh@105 -- # continue 2 00:15:10.072 14:52:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.072 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:10.072 14:52:09 -- nvmf/common.sh@105 -- # continue 2 00:15:10.072 14:52:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:10.072 14:52:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:10.072 14:52:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.072 14:52:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:10.072 14:52:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:10.072 14:52:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:10.072 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:10.072 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:15:10.072 altname enp9s0f0np0 00:15:10.072 inet 192.168.100.8/24 scope global mlx_0_0 00:15:10.072 valid_lft forever preferred_lft forever 00:15:10.072 14:52:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:10.072 14:52:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:10.072 14:52:09 -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.072 14:52:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.072 14:52:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:10.073 14:52:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:10.073 14:52:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:10.073 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:10.073 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:15:10.073 altname enp9s0f1np1 00:15:10.073 inet 192.168.100.9/24 scope global mlx_0_1 00:15:10.073 valid_lft forever preferred_lft forever 00:15:10.073 14:52:09 -- nvmf/common.sh@411 -- # return 0 00:15:10.073 14:52:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:10.073 14:52:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:10.073 14:52:09 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:10.073 14:52:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:10.073 14:52:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:10.073 14:52:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:10.073 14:52:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:10.073 14:52:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:10.073 14:52:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:10.073 14:52:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:10.073 14:52:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.073 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.073 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:10.073 14:52:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:10.073 14:52:09 -- nvmf/common.sh@105 -- # continue 2 00:15:10.073 14:52:09 -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:15:10.073 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.073 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:10.073 14:52:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.073 14:52:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:10.073 14:52:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:10.073 14:52:09 -- nvmf/common.sh@105 -- # continue 2 00:15:10.073 14:52:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:10.073 14:52:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:10.073 14:52:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.073 14:52:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:10.073 14:52:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:10.073 14:52:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.073 14:52:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.073 14:52:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:10.073 192.168.100.9' 00:15:10.073 14:52:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:10.073 192.168.100.9' 00:15:10.073 14:52:09 -- nvmf/common.sh@446 -- # head -n 1 00:15:10.073 14:52:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:10.073 14:52:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:10.073 192.168.100.9' 00:15:10.073 14:52:09 -- nvmf/common.sh@447 -- # tail -n +2 00:15:10.073 14:52:09 -- nvmf/common.sh@447 -- # head -n 1 00:15:10.073 14:52:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:10.073 14:52:09 -- 
nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:10.073 14:52:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:10.073 14:52:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:10.073 14:52:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:10.073 14:52:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:10.073 14:52:09 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:10.073 14:52:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:10.073 14:52:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:10.073 14:52:09 -- common/autotest_common.sh@10 -- # set +x 00:15:10.073 14:52:09 -- nvmf/common.sh@470 -- # nvmfpid=207036 00:15:10.073 14:52:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:10.073 14:52:09 -- nvmf/common.sh@471 -- # waitforlisten 207036 00:15:10.073 14:52:09 -- common/autotest_common.sh@817 -- # '[' -z 207036 ']' 00:15:10.073 14:52:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.073 14:52:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.073 14:52:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.073 14:52:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.073 14:52:09 -- common/autotest_common.sh@10 -- # set +x 00:15:10.073 [2024-04-26 14:52:09.733818] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:10.073 [2024-04-26 14:52:09.733958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.073 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.073 [2024-04-26 14:52:09.856390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.073 [2024-04-26 14:52:10.095584] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.073 [2024-04-26 14:52:10.095665] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.073 [2024-04-26 14:52:10.095695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.073 [2024-04-26 14:52:10.095722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.073 [2024-04-26 14:52:10.095742] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.073 [2024-04-26 14:52:10.095880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.073 [2024-04-26 14:52:10.095950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.073 [2024-04-26 14:52:10.096034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.073 [2024-04-26 14:52:10.096041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.651 14:52:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:10.651 14:52:10 -- common/autotest_common.sh@850 -- # return 0 00:15:10.651 14:52:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:10.651 14:52:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.651 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 14:52:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.651 14:52:10 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:10.651 14:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.651 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 14:52:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.651 14:52:10 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:10.651 14:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.651 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.912 14:52:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.912 14:52:10 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:10.912 14:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.912 14:52:10 -- common/autotest_common.sh@10 -- # set +x 00:15:10.912 [2024-04-26 14:52:10.974200] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000283c0/0x7fd90acd6940) succeed. 
00:15:10.912 [2024-04-26 14:52:10.985013] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028540/0x7fd90ac91940) succeed. 00:15:11.479 14:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.479 14:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.479 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:15:11.479 Malloc0 00:15:11.479 14:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:11.479 14:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.479 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:15:11.479 14:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.479 14:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.479 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:15:11.479 14:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:11.479 14:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.479 14:52:11 -- common/autotest_common.sh@10 -- # set +x 00:15:11.479 [2024-04-26 14:52:11.391064] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:11.479 14:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=207206 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@30 -- # READ_PID=207208 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # config=() 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # local subsystem config 00:15:11.479 14:52:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:11.479 { 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme$subsystem", 00:15:11.479 "trtype": "$TEST_TRANSPORT", 00:15:11.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "$NVMF_PORT", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.479 "hdgst": ${hdgst:-false}, 00:15:11.479 "ddgst": ${ddgst:-false} 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 } 00:15:11.479 EOF 00:15:11.479 )") 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=207210 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # config=() 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # local subsystem config 00:15:11.479 14:52:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:11.479 { 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme$subsystem", 00:15:11.479 "trtype": "$TEST_TRANSPORT", 00:15:11.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "$NVMF_PORT", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.479 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:15:11.479 "hdgst": ${hdgst:-false}, 00:15:11.479 "ddgst": ${ddgst:-false} 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 } 00:15:11.479 EOF 00:15:11.479 )") 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=207213 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@35 -- # sync 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # cat 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # config=() 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # local subsystem config 00:15:11.479 14:52:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:11.479 { 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme$subsystem", 00:15:11.479 "trtype": "$TEST_TRANSPORT", 00:15:11.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "$NVMF_PORT", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.479 "hdgst": ${hdgst:-false}, 00:15:11.479 "ddgst": ${ddgst:-false} 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 } 00:15:11.479 EOF 00:15:11.479 )") 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # cat 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # config=() 00:15:11.479 14:52:11 -- nvmf/common.sh@521 -- # local subsystem config 00:15:11.479 
14:52:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:11.479 { 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme$subsystem", 00:15:11.479 "trtype": "$TEST_TRANSPORT", 00:15:11.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "$NVMF_PORT", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.479 "hdgst": ${hdgst:-false}, 00:15:11.479 "ddgst": ${ddgst:-false} 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 } 00:15:11.479 EOF 00:15:11.479 )") 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # cat 00:15:11.479 14:52:11 -- target/bdev_io_wait.sh@37 -- # wait 207206 00:15:11.479 14:52:11 -- nvmf/common.sh@543 -- # cat 00:15:11.479 14:52:11 -- nvmf/common.sh@545 -- # jq . 00:15:11.479 14:52:11 -- nvmf/common.sh@545 -- # jq . 00:15:11.479 14:52:11 -- nvmf/common.sh@545 -- # jq . 
00:15:11.479 14:52:11 -- nvmf/common.sh@546 -- # IFS=, 00:15:11.479 14:52:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme1", 00:15:11.479 "trtype": "rdma", 00:15:11.479 "traddr": "192.168.100.8", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "4420", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.479 "hdgst": false, 00:15:11.479 "ddgst": false 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 }' 00:15:11.479 14:52:11 -- nvmf/common.sh@546 -- # IFS=, 00:15:11.479 14:52:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme1", 00:15:11.479 "trtype": "rdma", 00:15:11.479 "traddr": "192.168.100.8", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "4420", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.479 "hdgst": false, 00:15:11.479 "ddgst": false 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 }' 00:15:11.479 14:52:11 -- nvmf/common.sh@545 -- # jq . 
00:15:11.479 14:52:11 -- nvmf/common.sh@546 -- # IFS=, 00:15:11.479 14:52:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme1", 00:15:11.479 "trtype": "rdma", 00:15:11.479 "traddr": "192.168.100.8", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "4420", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.479 "hdgst": false, 00:15:11.479 "ddgst": false 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 }' 00:15:11.479 14:52:11 -- nvmf/common.sh@546 -- # IFS=, 00:15:11.479 14:52:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:11.479 "params": { 00:15:11.479 "name": "Nvme1", 00:15:11.479 "trtype": "rdma", 00:15:11.479 "traddr": "192.168.100.8", 00:15:11.479 "adrfam": "ipv4", 00:15:11.479 "trsvcid": "4420", 00:15:11.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.479 "hdgst": false, 00:15:11.479 "ddgst": false 00:15:11.479 }, 00:15:11.479 "method": "bdev_nvme_attach_controller" 00:15:11.479 }' 00:15:11.479 [2024-04-26 14:52:11.470741] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:11.479 [2024-04-26 14:52:11.470745] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:11.479 [2024-04-26 14:52:11.470897] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-26 14:52:11.470897] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:11.479 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:11.479 [2024-04-26 14:52:11.474435] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:11.480 [2024-04-26 14:52:11.474435] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:11.480 [2024-04-26 14:52:11.474584] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-26 14:52:11.474585] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:11.480 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:11.480 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.777 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.777 [2024-04-26 14:52:11.714257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.777 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.777 [2024-04-26 14:52:11.813117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.777 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.048 [2024-04-26 14:52:11.891208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.048 [2024-04-26 14:52:11.934868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:12.048 [2024-04-26 
14:52:11.970917] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.048 [2024-04-26 14:52:12.034180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:12.048 [2024-04-26 14:52:12.102848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:12.306 [2024-04-26 14:52:12.180324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:12.306 Running I/O for 1 seconds... 00:15:12.565 Running I/O for 1 seconds... 00:15:12.565 Running I/O for 1 seconds... 00:15:12.565 Running I/O for 1 seconds... 00:15:13.501 00:15:13.501 Latency(us) 00:15:13.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.501 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:13.501 Nvme1n1 : 1.01 13050.38 50.98 0.00 0.00 9765.30 7136.14 27379.48 00:15:13.501 =================================================================================================================== 00:15:13.501 Total : 13050.38 50.98 0.00 0.00 9765.30 7136.14 27379.48 00:15:13.501 00:15:13.501 Latency(us) 00:15:13.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.501 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:13.501 Nvme1n1 : 1.01 10055.75 39.28 0.00 0.00 12666.06 8883.77 29709.65 00:15:13.501 =================================================================================================================== 00:15:13.501 Total : 10055.75 39.28 0.00 0.00 12666.06 8883.77 29709.65 00:15:13.501 00:15:13.501 Latency(us) 00:15:13.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.501 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:13.501 Nvme1n1 : 1.00 166184.37 649.16 0.00 0.00 767.35 342.85 2924.85 00:15:13.501 =================================================================================================================== 00:15:13.501 Total : 166184.37 649.16 0.00 
0.00 767.35 342.85 2924.85 00:15:13.501 00:15:13.501 Latency(us) 00:15:13.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.501 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:13.501 Nvme1n1 : 1.01 13072.45 51.06 0.00 0.00 9749.98 6407.96 19223.89 00:15:13.501 =================================================================================================================== 00:15:13.501 Total : 13072.45 51.06 0.00 0.00 9749.98 6407.96 19223.89 00:15:14.435 14:52:14 -- target/bdev_io_wait.sh@38 -- # wait 207208 00:15:14.435 14:52:14 -- target/bdev_io_wait.sh@39 -- # wait 207210 00:15:14.435 14:52:14 -- target/bdev_io_wait.sh@40 -- # wait 207213 00:15:14.693 14:52:14 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.693 14:52:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.693 14:52:14 -- common/autotest_common.sh@10 -- # set +x 00:15:14.693 14:52:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.693 14:52:14 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:14.693 14:52:14 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:14.693 14:52:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:14.693 14:52:14 -- nvmf/common.sh@117 -- # sync 00:15:14.693 14:52:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:14.693 14:52:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:14.693 14:52:14 -- nvmf/common.sh@120 -- # set +e 00:15:14.693 14:52:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.693 14:52:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:14.693 rmmod nvme_rdma 00:15:14.693 rmmod nvme_fabrics 00:15:14.693 14:52:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.693 14:52:14 -- nvmf/common.sh@124 -- # set -e 00:15:14.693 14:52:14 -- nvmf/common.sh@125 -- # return 0 00:15:14.693 14:52:14 -- nvmf/common.sh@478 -- # '[' -n 207036 ']' 00:15:14.693 14:52:14 -- 
nvmf/common.sh@479 -- # killprocess 207036 00:15:14.693 14:52:14 -- common/autotest_common.sh@936 -- # '[' -z 207036 ']' 00:15:14.693 14:52:14 -- common/autotest_common.sh@940 -- # kill -0 207036 00:15:14.693 14:52:14 -- common/autotest_common.sh@941 -- # uname 00:15:14.693 14:52:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.693 14:52:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 207036 00:15:14.693 14:52:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.693 14:52:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.693 14:52:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 207036' 00:15:14.693 killing process with pid 207036 00:15:14.694 14:52:14 -- common/autotest_common.sh@955 -- # kill 207036 00:15:14.694 14:52:14 -- common/autotest_common.sh@960 -- # wait 207036 00:15:15.261 [2024-04-26 14:52:15.189179] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:16.643 14:52:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:16.643 14:52:16 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:16.643 00:15:16.643 real 0m8.911s 00:15:16.643 user 0m33.533s 00:15:16.643 sys 0m3.452s 00:15:16.643 14:52:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.643 14:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:16.643 ************************************ 00:15:16.643 END TEST nvmf_bdev_io_wait 00:15:16.643 ************************************ 00:15:16.643 14:52:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:16.643 14:52:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.643 14:52:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.643 14:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:16.643 ************************************ 00:15:16.643 START TEST 
nvmf_queue_depth 00:15:16.643 ************************************ 00:15:16.644 14:52:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:16.644 * Looking for test storage... 00:15:16.644 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:16.644 14:52:16 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.644 14:52:16 -- nvmf/common.sh@7 -- # uname -s 00:15:16.644 14:52:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.644 14:52:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.644 14:52:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.644 14:52:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.644 14:52:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.644 14:52:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.644 14:52:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.644 14:52:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.644 14:52:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.644 14:52:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.644 14:52:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:16.644 14:52:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:16.644 14:52:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.644 14:52:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.644 14:52:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.644 14:52:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.644 14:52:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:16.644 14:52:16 -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:16.644 14:52:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.644 14:52:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.644 14:52:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.644 14:52:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.644 14:52:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.644 14:52:16 -- paths/export.sh@5 -- # export PATH 00:15:16.644 
14:52:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.644 14:52:16 -- nvmf/common.sh@47 -- # : 0 00:15:16.644 14:52:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.644 14:52:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.644 14:52:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.644 14:52:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.644 14:52:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.644 14:52:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.644 14:52:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.644 14:52:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.644 14:52:16 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:16.644 14:52:16 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:16.644 14:52:16 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.644 14:52:16 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:16.644 14:52:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:16.644 14:52:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.644 14:52:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:16.644 14:52:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:16.644 14:52:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:16.644 14:52:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.644 14:52:16 -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 14> /dev/null' 00:15:16.644 14:52:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.644 14:52:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:16.644 14:52:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:16.644 14:52:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.644 14:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 14:52:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:18.548 14:52:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.548 14:52:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.548 14:52:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.548 14:52:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.548 14:52:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.548 14:52:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.548 14:52:18 -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.548 14:52:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.548 14:52:18 -- nvmf/common.sh@296 -- # e810=() 00:15:18.548 14:52:18 -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.548 14:52:18 -- nvmf/common.sh@297 -- # x722=() 00:15:18.548 14:52:18 -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.548 14:52:18 -- nvmf/common.sh@298 -- # mlx=() 00:15:18.548 14:52:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.548 14:52:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.548 14:52:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.548 14:52:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:18.548 14:52:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:18.548 14:52:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:18.548 14:52:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:18.548 14:52:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:18.548 14:52:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.548 14:52:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.548 14:52:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:18.548 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:18.548 14:52:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:18.548 14:52:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:18.548 14:52:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:18.549 14:52:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.549 14:52:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:18.549 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:15:18.549 14:52:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:18.549 14:52:18 -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.549 14:52:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.549 14:52:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.549 14:52:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.549 14:52:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.549 14:52:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:18.549 Found net devices under 0000:09:00.0: mlx_0_0 00:15:18.549 14:52:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.549 14:52:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.549 14:52:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.549 14:52:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.549 14:52:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.549 14:52:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:18.549 Found net devices under 0000:09:00.1: mlx_0_1 00:15:18.549 14:52:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.549 14:52:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:18.549 14:52:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:18.549 14:52:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:18.549 14:52:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:18.549 14:52:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:18.549 14:52:18 -- nvmf/common.sh@58 -- # uname 00:15:18.549 14:52:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:18.549 14:52:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:18.549 14:52:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:18.549 14:52:18 -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:15:18.549 14:52:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:18.549 14:52:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:18.549 14:52:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:18.549 14:52:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:18.549 14:52:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:18.549 14:52:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:18.549 14:52:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:18.549 14:52:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:18.549 14:52:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:18.549 14:52:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:18.549 14:52:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:18.807 14:52:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:18.807 14:52:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:18.807 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.807 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:18.807 14:52:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:18.807 14:52:18 -- nvmf/common.sh@105 -- # continue 2 00:15:18.807 14:52:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:18.807 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.807 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:18.807 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.807 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:18.807 14:52:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:18.807 14:52:18 -- nvmf/common.sh@105 -- # continue 2 00:15:18.807 14:52:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:18.807 14:52:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:18.807 14:52:18 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:18.807 14:52:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:18.807 14:52:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:18.807 14:52:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:18.807 14:52:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:18.807 14:52:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:18.807 14:52:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:18.808 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:18.808 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:15:18.808 altname enp9s0f0np0 00:15:18.808 inet 192.168.100.8/24 scope global mlx_0_0 00:15:18.808 valid_lft forever preferred_lft forever 00:15:18.808 14:52:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:18.808 14:52:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:18.808 14:52:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:18.808 14:52:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:18.808 14:52:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:18.808 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:18.808 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:15:18.808 altname enp9s0f1np1 00:15:18.808 inet 192.168.100.9/24 scope global mlx_0_1 00:15:18.808 valid_lft forever preferred_lft forever 00:15:18.808 14:52:18 -- nvmf/common.sh@411 -- # return 0 00:15:18.808 14:52:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:18.808 14:52:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:18.808 14:52:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:18.808 14:52:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 
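The `ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1` pipeline traced just above is how `get_ip_address` pulls the bare address out of the interface dump. A minimal sketch of the same pipeline, fed from a canned line so it runs without the mlx_0_0 interface present (the helper name `extract_ip` is illustrative, not from nvmf/common.sh):

```shell
# Illustrative stand-in for the get_ip_address pipeline seen in the trace:
# field 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; cut strips the prefix.
extract_ip() {
    awk '{print $4}' | cut -d/ -f1
}

# Canned `ip -o -4` style line matching the mlx_0_0 state shown above.
line='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
echo "$line" | extract_ip
```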
00:15:18.808 14:52:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:18.808 14:52:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:18.808 14:52:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:18.808 14:52:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:18.808 14:52:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:18.808 14:52:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:18.808 14:52:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:18.808 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.808 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:18.808 14:52:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:18.808 14:52:18 -- nvmf/common.sh@105 -- # continue 2 00:15:18.808 14:52:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:18.808 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.808 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:18.808 14:52:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:18.808 14:52:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:18.808 14:52:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@105 -- # continue 2 00:15:18.808 14:52:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:18.808 14:52:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:18.808 14:52:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:18.808 14:52:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:18.808 14:52:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:18.808 14:52:18 -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:18.808 14:52:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:18.808 14:52:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:18.808 192.168.100.9' 00:15:18.808 14:52:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:18.808 192.168.100.9' 00:15:18.808 14:52:18 -- nvmf/common.sh@446 -- # head -n 1 00:15:18.808 14:52:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:18.808 14:52:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:18.808 192.168.100.9' 00:15:18.808 14:52:18 -- nvmf/common.sh@447 -- # tail -n +2 00:15:18.808 14:52:18 -- nvmf/common.sh@447 -- # head -n 1 00:15:18.808 14:52:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:18.808 14:52:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:18.808 14:52:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:18.808 14:52:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:18.808 14:52:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:18.808 14:52:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:18.808 14:52:18 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:18.808 14:52:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:18.808 14:52:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:18.808 14:52:18 -- common/autotest_common.sh@10 -- # set +x 00:15:18.808 14:52:18 -- nvmf/common.sh@470 -- # nvmfpid=209558 00:15:18.808 14:52:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:18.808 14:52:18 -- nvmf/common.sh@471 -- # waitforlisten 209558 00:15:18.808 14:52:18 -- common/autotest_common.sh@817 -- # '[' -z 209558 ']' 00:15:18.808 14:52:18 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:18.808 14:52:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.808 14:52:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.808 14:52:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.808 14:52:18 -- common/autotest_common.sh@10 -- # set +x 00:15:18.808 [2024-04-26 14:52:18.783303] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:18.808 [2024-04-26 14:52:18.783447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.808 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.067 [2024-04-26 14:52:18.911628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.328 [2024-04-26 14:52:19.158590] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.328 [2024-04-26 14:52:19.158672] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.328 [2024-04-26 14:52:19.158704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.328 [2024-04-26 14:52:19.158730] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.328 [2024-04-26 14:52:19.158750] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
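Earlier in the trace, `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` are split out of the two-line `RDMA_IP_LIST` with `head -n 1` and `tail -n +2 | head -n 1`. A self-contained sketch of that split, using the two addresses this run actually discovered:

```shell
# Sketch of the RDMA_IP_LIST split traced above: first line becomes the
# first target IP, second line the second.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```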
00:15:19.328 [2024-04-26 14:52:19.158817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.898 14:52:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.898 14:52:19 -- common/autotest_common.sh@850 -- # return 0 00:15:19.898 14:52:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:19.898 14:52:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:19.898 14:52:19 -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 14:52:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.898 14:52:19 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:19.898 14:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.898 14:52:19 -- common/autotest_common.sh@10 -- # set +x 00:15:19.898 [2024-04-26 14:52:19.840463] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027940/0x7f238d454940) succeed. 00:15:19.898 [2024-04-26 14:52:19.852618] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027ac0/0x7f238d410940) succeed. 
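Both ConnectX-5 ports created as IB devices here were matched earlier in this trace by `gather_supported_nvmf_pci_devs`, which buckets NICs into e810/x722/mlx arrays by PCI vendor:device pairs. A condensed sketch of that bucketing, using the IDs visible in the trace (`classify_nic` is a hypothetical helper, not part of SPDK):

```shell
# Condensed sketch of the vendor/device bucketing traced in nvmf/common.sh:
# Intel 0x1592/0x159b -> e810, Intel 0x37d2 -> x722, Mellanox -> mlx.
intel=0x8086
mellanox=0x15b3

classify_nic() {
    # $1 = vendor ID, $2 = device ID; echoes the bucket name
    case "$1:$2" in
        "$intel:0x1592" | "$intel:0x159b") echo e810 ;;
        "$intel:0x37d2")                   echo x722 ;;
        "$mellanox:"*)                     echo mlx ;;
        *)                                 echo unknown ;;
    esac
}

classify_nic 0x15b3 0x1017   # the ConnectX-5 (0x1017) ports found in this run
```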
00:15:19.898 14:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.898 14:52:19 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.898 14:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.898 14:52:19 -- common/autotest_common.sh@10 -- # set +x 00:15:20.157 Malloc0 00:15:20.157 14:52:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.157 14:52:20 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.157 14:52:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.157 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.157 14:52:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.157 14:52:20 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.157 14:52:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.157 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.157 14:52:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.157 14:52:20 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.157 14:52:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.157 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.157 [2024-04-26 14:52:20.054307] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.157 14:52:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.157 14:52:20 -- target/queue_depth.sh@30 -- # bdevperf_pid=209708 00:15:20.157 14:52:20 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:20.157 14:52:20 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:15:20.157 14:52:20 -- target/queue_depth.sh@33 -- # waitforlisten 209708 /var/tmp/bdevperf.sock 00:15:20.157 14:52:20 -- common/autotest_common.sh@817 -- # '[' -z 209708 ']' 00:15:20.157 14:52:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.157 14:52:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.157 14:52:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:20.157 14:52:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.157 14:52:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.157 [2024-04-26 14:52:20.140910] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:20.157 [2024-04-26 14:52:20.141058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209708 ] 00:15:20.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.417 [2024-04-26 14:52:20.272530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.678 [2024-04-26 14:52:20.503039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.245 14:52:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:21.245 14:52:21 -- common/autotest_common.sh@850 -- # return 0 00:15:21.245 14:52:21 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:21.245 14:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:21.245 14:52:21 -- common/autotest_common.sh@10 -- # set +x 00:15:21.245 NVMe0n1 00:15:21.245 14:52:21 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:21.245 14:52:21 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.245 Running I/O for 10 seconds... 00:15:33.481 00:15:33.481 Latency(us) 00:15:33.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.481 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:33.481 Verification LBA range: start 0x0 length 0x4000 00:15:33.481 NVMe0n1 : 10.08 10342.42 40.40 0.00 0.00 98567.94 35729.26 60972.75 00:15:33.481 =================================================================================================================== 00:15:33.481 Total : 10342.42 40.40 0.00 0.00 98567.94 35729.26 60972.75 00:15:33.481 0 00:15:33.481 14:52:31 -- target/queue_depth.sh@39 -- # killprocess 209708 00:15:33.481 14:52:31 -- common/autotest_common.sh@936 -- # '[' -z 209708 ']' 00:15:33.481 14:52:31 -- common/autotest_common.sh@940 -- # kill -0 209708 00:15:33.481 14:52:31 -- common/autotest_common.sh@941 -- # uname 00:15:33.481 14:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.481 14:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 209708 00:15:33.481 14:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.481 14:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.481 14:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 209708' 00:15:33.481 killing process with pid 209708 00:15:33.481 14:52:31 -- common/autotest_common.sh@955 -- # kill 209708 00:15:33.481 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.481 00:15:33.481 Latency(us) 00:15:33.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.481 
=================================================================================================================== 00:15:33.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.481 14:52:31 -- common/autotest_common.sh@960 -- # wait 209708 00:15:33.481 14:52:32 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:33.481 14:52:32 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:33.481 14:52:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:33.481 14:52:32 -- nvmf/common.sh@117 -- # sync 00:15:33.481 14:52:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:33.481 14:52:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:33.481 14:52:32 -- nvmf/common.sh@120 -- # set +e 00:15:33.481 14:52:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.481 14:52:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:33.481 rmmod nvme_rdma 00:15:33.481 rmmod nvme_fabrics 00:15:33.481 14:52:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.481 14:52:32 -- nvmf/common.sh@124 -- # set -e 00:15:33.481 14:52:32 -- nvmf/common.sh@125 -- # return 0 00:15:33.481 14:52:32 -- nvmf/common.sh@478 -- # '[' -n 209558 ']' 00:15:33.481 14:52:32 -- nvmf/common.sh@479 -- # killprocess 209558 00:15:33.481 14:52:32 -- common/autotest_common.sh@936 -- # '[' -z 209558 ']' 00:15:33.481 14:52:32 -- common/autotest_common.sh@940 -- # kill -0 209558 00:15:33.481 14:52:32 -- common/autotest_common.sh@941 -- # uname 00:15:33.481 14:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.481 14:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 209558 00:15:33.481 14:52:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:33.481 14:52:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:33.481 14:52:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 209558' 00:15:33.481 killing process with pid 209558 00:15:33.481 14:52:32 -- common/autotest_common.sh@955 -- # 
kill 209558 00:15:33.481 14:52:32 -- common/autotest_common.sh@960 -- # wait 209558 00:15:33.481 [2024-04-26 14:52:32.826630] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:34.418 14:52:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:34.418 14:52:34 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:34.418 00:15:34.418 real 0m17.765s 00:15:34.418 user 0m29.032s 00:15:34.419 sys 0m2.411s 00:15:34.419 14:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:34.419 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:34.419 ************************************ 00:15:34.419 END TEST nvmf_queue_depth 00:15:34.419 ************************************ 00:15:34.419 14:52:34 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:34.419 14:52:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:34.419 14:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.419 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:34.419 ************************************ 00:15:34.419 START TEST nvmf_multipath 00:15:34.419 ************************************ 00:15:34.419 14:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:34.419 * Looking for test storage... 
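The bdevperf summary above reports 10342.42 IOPS at a 4096-byte IO size over the 10-second run; the MiB/s column is just that product scaled to MiB. A quick arithmetic cross-check of the table:

```shell
# MiB/s = IOPS * IO size / 1 MiB; should match the 40.40 MiB/s reported above.
awk 'BEGIN { printf "%.2f\n", 10342.42 * 4096 / 1048576 }'
```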
00:15:34.419 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:34.419 14:52:34 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.419 14:52:34 -- nvmf/common.sh@7 -- # uname -s 00:15:34.419 14:52:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.419 14:52:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.419 14:52:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.419 14:52:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.419 14:52:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.419 14:52:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.419 14:52:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.419 14:52:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.419 14:52:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.419 14:52:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.678 14:52:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.678 14:52:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:34.678 14:52:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.678 14:52:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.678 14:52:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.678 14:52:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.678 14:52:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:34.678 14:52:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.678 14:52:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.678 14:52:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.678 14:52:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.678 14:52:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.678 14:52:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.678 14:52:34 -- paths/export.sh@5 -- # export PATH 00:15:34.678 14:52:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.678 14:52:34 -- nvmf/common.sh@47 -- # : 0 00:15:34.678 14:52:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.678 14:52:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.678 14:52:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.678 14:52:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.678 14:52:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.678 14:52:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.678 14:52:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.678 14:52:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.678 14:52:34 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.678 14:52:34 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.678 14:52:34 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:34.678 14:52:34 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:34.678 14:52:34 -- target/multipath.sh@43 -- # nvmftestinit 00:15:34.678 14:52:34 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:34.678 14:52:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.678 14:52:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:34.678 14:52:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:34.678 14:52:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:34.678 14:52:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:34.678 14:52:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.678 14:52:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.678 14:52:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:34.678 14:52:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:34.678 14:52:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.678 14:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:36.582 14:52:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:36.582 14:52:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.582 14:52:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.582 14:52:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.582 14:52:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.582 14:52:36 -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.582 14:52:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@296 -- # e810=() 00:15:36.582 14:52:36 -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.582 14:52:36 -- nvmf/common.sh@297 -- # x722=() 00:15:36.582 14:52:36 -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.582 14:52:36 -- nvmf/common.sh@298 -- # mlx=() 00:15:36.582 14:52:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.582 14:52:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:15:36.582 14:52:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.582 14:52:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:36.582 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:36.582 14:52:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:36.582 14:52:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:36.582 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:15:36.582 14:52:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:15:36.582 14:52:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.582 14:52:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.582 14:52:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:36.582 Found net devices under 0000:09:00.0: mlx_0_0 00:15:36.582 14:52:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.582 14:52:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.582 14:52:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:36.582 Found net devices under 0000:09:00.1: mlx_0_1 00:15:36.582 14:52:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.582 14:52:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:36.582 14:52:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:36.582 14:52:36 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:36.582 14:52:36 -- nvmf/common.sh@58 -- # uname 00:15:36.582 14:52:36 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:36.582 14:52:36 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:36.582 14:52:36 -- nvmf/common.sh@63 -- # 
modprobe ib_core 00:15:36.582 14:52:36 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:36.582 14:52:36 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:36.582 14:52:36 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:36.582 14:52:36 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:36.582 14:52:36 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:36.582 14:52:36 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:36.582 14:52:36 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:36.582 14:52:36 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:36.582 14:52:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:36.582 14:52:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:36.582 14:52:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:36.582 14:52:36 -- nvmf/common.sh@105 -- # continue 2 00:15:36.582 14:52:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:36.582 14:52:36 -- nvmf/common.sh@105 -- # continue 2 00:15:36.582 14:52:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:36.582 14:52:36 -- 
nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:36.582 14:52:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.582 14:52:36 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:36.582 14:52:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:36.582 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:36.582 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:15:36.582 altname enp9s0f0np0 00:15:36.582 inet 192.168.100.8/24 scope global mlx_0_0 00:15:36.582 valid_lft forever preferred_lft forever 00:15:36.582 14:52:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:36.582 14:52:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:36.582 14:52:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.582 14:52:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.582 14:52:36 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:36.582 14:52:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:36.582 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:36.582 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:15:36.582 altname enp9s0f1np1 00:15:36.582 inet 192.168.100.9/24 scope global mlx_0_1 00:15:36.582 valid_lft forever preferred_lft forever 00:15:36.582 14:52:36 -- nvmf/common.sh@411 -- # return 0 00:15:36.582 14:52:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.582 14:52:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:36.582 14:52:36 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 
00:15:36.582 14:52:36 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:36.582 14:52:36 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:36.582 14:52:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:36.582 14:52:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:36.582 14:52:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:36.582 14:52:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:36.582 14:52:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.582 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:36.582 14:52:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:36.583 14:52:36 -- nvmf/common.sh@105 -- # continue 2 00:15:36.583 14:52:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.583 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.583 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:36.583 14:52:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.583 14:52:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:36.583 14:52:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:36.583 14:52:36 -- nvmf/common.sh@105 -- # continue 2 00:15:36.583 14:52:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:36.583 14:52:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:36.583 14:52:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.583 14:52:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:36.583 14:52:36 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:36.583 14:52:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.583 14:52:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.583 14:52:36 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:36.583 192.168.100.9' 00:15:36.583 14:52:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:36.583 192.168.100.9' 00:15:36.583 14:52:36 -- nvmf/common.sh@446 -- # head -n 1 00:15:36.583 14:52:36 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:36.583 14:52:36 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:36.583 192.168.100.9' 00:15:36.583 14:52:36 -- nvmf/common.sh@447 -- # tail -n +2 00:15:36.583 14:52:36 -- nvmf/common.sh@447 -- # head -n 1 00:15:36.583 14:52:36 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:36.583 14:52:36 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:36.583 14:52:36 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:36.583 14:52:36 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:15:36.583 14:52:36 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:15:36.583 14:52:36 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:15:36.583 run this test only with TCP transport for now 00:15:36.583 14:52:36 -- target/multipath.sh@53 -- # nvmftestfini 00:15:36.583 14:52:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:36.583 14:52:36 -- nvmf/common.sh@117 -- # sync 00:15:36.583 14:52:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:36.583 
14:52:36 -- nvmf/common.sh@120 -- # set +e 00:15:36.583 14:52:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.583 14:52:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:36.583 rmmod nvme_rdma 00:15:36.583 rmmod nvme_fabrics 00:15:36.583 14:52:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.583 14:52:36 -- nvmf/common.sh@124 -- # set -e 00:15:36.583 14:52:36 -- nvmf/common.sh@125 -- # return 0 00:15:36.583 14:52:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:36.583 14:52:36 -- target/multipath.sh@54 -- # exit 0 00:15:36.583 14:52:36 -- target/multipath.sh@1 -- # nvmftestfini 00:15:36.583 14:52:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:36.583 14:52:36 -- nvmf/common.sh@117 -- # sync 00:15:36.583 14:52:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@120 -- # set +e 00:15:36.583 14:52:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.583 14:52:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:36.583 14:52:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.583 14:52:36 -- nvmf/common.sh@124 -- # set -e 00:15:36.583 14:52:36 -- nvmf/common.sh@125 -- # return 0 00:15:36.583 14:52:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:36.583 14:52:36 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:36.583 00:15:36.583 real 0m2.069s 00:15:36.583 user 0m0.803s 00:15:36.583 sys 0m1.350s 00:15:36.583 14:52:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.583 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:15:36.583 ************************************ 00:15:36.583 END TEST nvmf_multipath 00:15:36.583 ************************************ 00:15:36.583 14:52:36 -- 
nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:36.583 14:52:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:36.583 14:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.583 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:15:36.583 ************************************ 00:15:36.583 START TEST nvmf_zcopy 00:15:36.583 ************************************ 00:15:36.583 14:52:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:36.842 * Looking for test storage... 00:15:36.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:36.842 14:52:36 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.842 14:52:36 -- nvmf/common.sh@7 -- # uname -s 00:15:36.842 14:52:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.842 14:52:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.842 14:52:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.842 14:52:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.842 14:52:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.842 14:52:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.842 14:52:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.842 14:52:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.842 14:52:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.842 14:52:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.842 14:52:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:36.842 14:52:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:36.842 14:52:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:15:36.842 14:52:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.842 14:52:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.842 14:52:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.842 14:52:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:36.842 14:52:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.842 14:52:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.842 14:52:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.842 14:52:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.842 14:52:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.842 14:52:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.842 14:52:36 -- paths/export.sh@5 -- # export PATH 00:15:36.842 14:52:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.842 14:52:36 -- nvmf/common.sh@47 -- # : 0 00:15:36.842 14:52:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.842 14:52:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.842 14:52:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.843 14:52:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.843 14:52:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.843 14:52:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.843 14:52:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.843 14:52:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.843 14:52:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:36.843 14:52:36 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:36.843 14:52:36 -- nvmf/common.sh@435 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:15:36.843 14:52:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:36.843 14:52:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:36.843 14:52:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:36.843 14:52:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.843 14:52:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.843 14:52:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.843 14:52:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:36.843 14:52:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:36.843 14:52:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.843 14:52:36 -- common/autotest_common.sh@10 -- # set +x 00:15:38.751 14:52:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.751 14:52:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.751 14:52:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.751 14:52:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.751 14:52:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.751 14:52:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.751 14:52:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.751 14:52:38 -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.751 14:52:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.751 14:52:38 -- nvmf/common.sh@296 -- # e810=() 00:15:38.751 14:52:38 -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.751 14:52:38 -- nvmf/common.sh@297 -- # x722=() 00:15:38.751 14:52:38 -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.751 14:52:38 -- nvmf/common.sh@298 -- # mlx=() 00:15:38.751 14:52:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.751 14:52:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.751 14:52:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.751 14:52:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.751 14:52:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:38.751 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:38.751 14:52:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.751 14:52:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.751 14:52:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:38.751 Found 
0000:09:00.1 (0x15b3 - 0x1017) 00:15:38.751 14:52:38 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.751 14:52:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.751 14:52:38 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.751 14:52:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.751 14:52:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.751 14:52:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.751 14:52:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:38.751 Found net devices under 0000:09:00.0: mlx_0_0 00:15:38.751 14:52:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.751 14:52:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.751 14:52:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.751 14:52:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.751 14:52:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:38.751 Found net devices under 0000:09:00.1: mlx_0_1 00:15:38.751 14:52:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.751 14:52:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:38.751 14:52:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:38.751 14:52:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:38.751 14:52:38 -- nvmf/common.sh@409 -- # rdma_device_init 
00:15:38.751 14:52:38 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:38.751 14:52:38 -- nvmf/common.sh@58 -- # uname 00:15:38.751 14:52:38 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:38.751 14:52:38 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:38.751 14:52:38 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:38.751 14:52:38 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:38.751 14:52:38 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:38.751 14:52:38 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:38.752 14:52:38 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:38.752 14:52:38 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:38.752 14:52:38 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:38.752 14:52:38 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:38.752 14:52:38 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:38.752 14:52:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.752 14:52:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:38.752 14:52:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:38.752 14:52:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.752 14:52:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:38.752 14:52:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@105 -- # continue 2 00:15:38.752 14:52:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 
14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@105 -- # continue 2 00:15:38.752 14:52:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:38.752 14:52:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.752 14:52:38 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:38.752 14:52:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:38.752 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.752 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:15:38.752 altname enp9s0f0np0 00:15:38.752 inet 192.168.100.8/24 scope global mlx_0_0 00:15:38.752 valid_lft forever preferred_lft forever 00:15:38.752 14:52:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:38.752 14:52:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.752 14:52:38 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:38.752 14:52:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:38.752 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.752 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:15:38.752 altname enp9s0f1np1 00:15:38.752 inet 192.168.100.9/24 scope global mlx_0_1 00:15:38.752 valid_lft forever 
preferred_lft forever 00:15:38.752 14:52:38 -- nvmf/common.sh@411 -- # return 0 00:15:38.752 14:52:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:38.752 14:52:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:38.752 14:52:38 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:38.752 14:52:38 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:38.752 14:52:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.752 14:52:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:38.752 14:52:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:38.752 14:52:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.752 14:52:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:38.752 14:52:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@105 -- # continue 2 00:15:38.752 14:52:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.752 14:52:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.752 14:52:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@105 -- # continue 2 00:15:38.752 14:52:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:38.752 14:52:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:38.752 
14:52:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.752 14:52:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:38.752 14:52:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.752 14:52:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.752 14:52:38 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:38.752 192.168.100.9' 00:15:38.752 14:52:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:38.752 192.168.100.9' 00:15:38.752 14:52:38 -- nvmf/common.sh@446 -- # head -n 1 00:15:38.752 14:52:38 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:38.752 14:52:38 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:38.752 192.168.100.9' 00:15:38.752 14:52:38 -- nvmf/common.sh@447 -- # tail -n +2 00:15:38.752 14:52:38 -- nvmf/common.sh@447 -- # head -n 1 00:15:38.752 14:52:38 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:38.752 14:52:38 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:38.752 14:52:38 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:38.752 14:52:38 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:38.752 14:52:38 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:38.752 14:52:38 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:38.752 14:52:38 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:38.752 14:52:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:38.753 14:52:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:38.753 14:52:38 -- common/autotest_common.sh@10 -- # set +x 00:15:38.753 14:52:38 -- nvmf/common.sh@470 -- # nvmfpid=214636 
00:15:38.753 14:52:38 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:38.753 14:52:38 -- nvmf/common.sh@471 -- # waitforlisten 214636 00:15:38.753 14:52:38 -- common/autotest_common.sh@817 -- # '[' -z 214636 ']' 00:15:38.753 14:52:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.753 14:52:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.753 14:52:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.753 14:52:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.753 14:52:38 -- common/autotest_common.sh@10 -- # set +x 00:15:38.753 [2024-04-26 14:52:38.682260] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:38.753 [2024-04-26 14:52:38.682401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.753 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.753 [2024-04-26 14:52:38.814587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.012 [2024-04-26 14:52:39.035427] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.012 [2024-04-26 14:52:39.035516] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.012 [2024-04-26 14:52:39.035537] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.012 [2024-04-26 14:52:39.035556] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:39.012 [2024-04-26 14:52:39.035579] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.012 [2024-04-26 14:52:39.035641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.578 14:52:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:39.578 14:52:39 -- common/autotest_common.sh@850 -- # return 0 00:15:39.578 14:52:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:39.578 14:52:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:39.578 14:52:39 -- common/autotest_common.sh@10 -- # set +x 00:15:39.578 14:52:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.578 14:52:39 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:15:39.578 14:52:39 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:15:39.578 Unsupported transport: rdma 00:15:39.578 14:52:39 -- target/zcopy.sh@17 -- # exit 0 00:15:39.578 14:52:39 -- target/zcopy.sh@1 -- # process_shm --id 0 00:15:39.578 14:52:39 -- common/autotest_common.sh@794 -- # type=--id 00:15:39.578 14:52:39 -- common/autotest_common.sh@795 -- # id=0 00:15:39.578 14:52:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:39.578 14:52:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:39.578 14:52:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:39.578 14:52:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:39.578 14:52:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:39.579 14:52:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:39.579 nvmf_trace.0 00:15:39.579 14:52:39 -- common/autotest_common.sh@809 -- # return 0 00:15:39.579 14:52:39 -- target/zcopy.sh@1 -- # nvmftestfini 00:15:39.579 14:52:39 -- nvmf/common.sh@477 -- # 
nvmfcleanup 00:15:39.579 14:52:39 -- nvmf/common.sh@117 -- # sync 00:15:39.579 14:52:39 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:39.579 14:52:39 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:39.579 14:52:39 -- nvmf/common.sh@120 -- # set +e 00:15:39.579 14:52:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.579 14:52:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:39.579 rmmod nvme_rdma 00:15:39.579 rmmod nvme_fabrics 00:15:39.838 14:52:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.838 14:52:39 -- nvmf/common.sh@124 -- # set -e 00:15:39.838 14:52:39 -- nvmf/common.sh@125 -- # return 0 00:15:39.838 14:52:39 -- nvmf/common.sh@478 -- # '[' -n 214636 ']' 00:15:39.838 14:52:39 -- nvmf/common.sh@479 -- # killprocess 214636 00:15:39.838 14:52:39 -- common/autotest_common.sh@936 -- # '[' -z 214636 ']' 00:15:39.838 14:52:39 -- common/autotest_common.sh@940 -- # kill -0 214636 00:15:39.838 14:52:39 -- common/autotest_common.sh@941 -- # uname 00:15:39.838 14:52:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.838 14:52:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 214636 00:15:39.838 14:52:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:39.838 14:52:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:39.838 14:52:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 214636' 00:15:39.838 killing process with pid 214636 00:15:39.838 14:52:39 -- common/autotest_common.sh@955 -- # kill 214636 00:15:39.838 14:52:39 -- common/autotest_common.sh@960 -- # wait 214636 00:15:41.218 14:52:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:41.218 14:52:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:41.218 00:15:41.218 real 0m4.348s 00:15:41.218 user 0m3.259s 00:15:41.218 sys 0m1.743s 00:15:41.218 14:52:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.218 14:52:40 -- common/autotest_common.sh@10 -- 
# set +x 00:15:41.218 ************************************ 00:15:41.218 END TEST nvmf_zcopy 00:15:41.218 ************************************ 00:15:41.218 14:52:41 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:41.218 14:52:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.218 14:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.218 14:52:41 -- common/autotest_common.sh@10 -- # set +x 00:15:41.218 ************************************ 00:15:41.218 START TEST nvmf_nmic 00:15:41.218 ************************************ 00:15:41.218 14:52:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:41.218 * Looking for test storage... 00:15:41.218 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:41.218 14:52:41 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.218 14:52:41 -- nvmf/common.sh@7 -- # uname -s 00:15:41.218 14:52:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.218 14:52:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.218 14:52:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.218 14:52:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.218 14:52:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.218 14:52:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.218 14:52:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.218 14:52:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.218 14:52:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.218 14:52:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.218 14:52:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:15:41.218 14:52:41 -- nvmf/common.sh@18 -- 
# NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:15:41.218 14:52:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.218 14:52:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.218 14:52:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.218 14:52:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.218 14:52:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:41.218 14:52:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.218 14:52:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.218 14:52:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.218 14:52:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.218 14:52:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.218 14:52:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.218 14:52:41 -- paths/export.sh@5 -- # export PATH 00:15:41.218 14:52:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.218 14:52:41 -- nvmf/common.sh@47 -- # : 0 00:15:41.218 14:52:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.218 14:52:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.218 14:52:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.218 14:52:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.218 14:52:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.218 14:52:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.218 14:52:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.218 14:52:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.218 14:52:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.218 14:52:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.218 14:52:41 -- target/nmic.sh@14 -- # 
nvmftestinit 00:15:41.218 14:52:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:41.218 14:52:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.218 14:52:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:41.218 14:52:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:41.218 14:52:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:41.218 14:52:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.218 14:52:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.218 14:52:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.218 14:52:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:41.218 14:52:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:41.218 14:52:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.218 14:52:41 -- common/autotest_common.sh@10 -- # set +x 00:15:43.126 14:52:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:43.126 14:52:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.126 14:52:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.126 14:52:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.126 14:52:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.126 14:52:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.126 14:52:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.126 14:52:43 -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.126 14:52:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.126 14:52:43 -- nvmf/common.sh@296 -- # e810=() 00:15:43.126 14:52:43 -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.126 14:52:43 -- nvmf/common.sh@297 -- # x722=() 00:15:43.126 14:52:43 -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.126 14:52:43 -- nvmf/common.sh@298 -- # mlx=() 00:15:43.126 14:52:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.126 14:52:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.126 14:52:43 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.126 14:52:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.126 14:52:43 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:43.126 14:52:43 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:43.126 14:52:43 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:43.126 14:52:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.126 14:52:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.126 14:52:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:43.126 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:43.126 14:52:43 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.126 14:52:43 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:15:43.126 14:52:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:43.126 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:15:43.126 14:52:43 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:43.126 14:52:43 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.126 14:52:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.126 14:52:43 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.127 14:52:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:43.127 14:52:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.127 14:52:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:43.127 Found net devices under 0000:09:00.0: mlx_0_0 00:15:43.127 14:52:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.127 14:52:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.127 14:52:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:43.127 14:52:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.127 14:52:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:43.127 Found net devices under 0000:09:00.1: mlx_0_1 00:15:43.127 14:52:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.127 14:52:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:43.127 14:52:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:43.127 14:52:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 
00:15:43.127 14:52:43 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:43.127 14:52:43 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:43.127 14:52:43 -- nvmf/common.sh@58 -- # uname 00:15:43.127 14:52:43 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:43.127 14:52:43 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:43.127 14:52:43 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:43.127 14:52:43 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:43.127 14:52:43 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:43.127 14:52:43 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:43.127 14:52:43 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:43.127 14:52:43 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:43.127 14:52:43 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:43.127 14:52:43 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:43.127 14:52:43 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:43.127 14:52:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.127 14:52:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:43.127 14:52:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:43.127 14:52:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:43.127 14:52:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:43.127 14:52:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:43.127 14:52:43 -- nvmf/common.sh@105 -- # continue 2 00:15:43.127 14:52:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.127 14:52:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:43.127 14:52:43 -- nvmf/common.sh@105 -- # continue 2 00:15:43.127 14:52:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:43.127 14:52:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:43.127 14:52:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:43.127 14:52:43 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:43.127 14:52:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:43.127 14:52:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:43.127 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.127 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:15:43.127 altname enp9s0f0np0 00:15:43.127 inet 192.168.100.8/24 scope global mlx_0_0 00:15:43.127 valid_lft forever preferred_lft forever 00:15:43.127 14:52:43 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:43.127 14:52:43 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:43.127 14:52:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.127 14:52:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.403 14:52:43 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:43.403 14:52:43 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:43.403 14:52:43 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:43.403 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.403 link/ether b8:59:9f:af:fe:01 brd 
ff:ff:ff:ff:ff:ff 00:15:43.403 altname enp9s0f1np1 00:15:43.403 inet 192.168.100.9/24 scope global mlx_0_1 00:15:43.403 valid_lft forever preferred_lft forever 00:15:43.403 14:52:43 -- nvmf/common.sh@411 -- # return 0 00:15:43.403 14:52:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:43.403 14:52:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:43.403 14:52:43 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:43.403 14:52:43 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:43.403 14:52:43 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:43.403 14:52:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.403 14:52:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:43.403 14:52:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:43.403 14:52:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:43.403 14:52:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:43.403 14:52:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.403 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.403 14:52:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.403 14:52:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:43.403 14:52:43 -- nvmf/common.sh@105 -- # continue 2 00:15:43.403 14:52:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.403 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.403 14:52:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.403 14:52:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.403 14:52:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.403 14:52:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:43.403 14:52:43 -- nvmf/common.sh@105 -- # continue 2 00:15:43.403 14:52:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:43.403 
14:52:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:43.403 14:52:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.403 14:52:43 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:43.403 14:52:43 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:43.403 14:52:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.403 14:52:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.403 14:52:43 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:43.403 192.168.100.9' 00:15:43.403 14:52:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:43.403 192.168.100.9' 00:15:43.403 14:52:43 -- nvmf/common.sh@446 -- # head -n 1 00:15:43.403 14:52:43 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:43.403 14:52:43 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:43.403 192.168.100.9' 00:15:43.404 14:52:43 -- nvmf/common.sh@447 -- # tail -n +2 00:15:43.404 14:52:43 -- nvmf/common.sh@447 -- # head -n 1 00:15:43.404 14:52:43 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:43.404 14:52:43 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:43.404 14:52:43 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:43.404 14:52:43 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:43.404 14:52:43 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:43.404 14:52:43 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:43.404 14:52:43 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:43.404 14:52:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:43.404 14:52:43 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:15:43.404 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:15:43.404 14:52:43 -- nvmf/common.sh@470 -- # nvmfpid=216639 00:15:43.404 14:52:43 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.404 14:52:43 -- nvmf/common.sh@471 -- # waitforlisten 216639 00:15:43.404 14:52:43 -- common/autotest_common.sh@817 -- # '[' -z 216639 ']' 00:15:43.404 14:52:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.404 14:52:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:43.404 14:52:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.404 14:52:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:43.404 14:52:43 -- common/autotest_common.sh@10 -- # set +x 00:15:43.404 [2024-04-26 14:52:43.353903] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:43.404 [2024-04-26 14:52:43.354059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.404 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.662 [2024-04-26 14:52:43.487554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.662 [2024-04-26 14:52:43.741325] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.662 [2024-04-26 14:52:43.741411] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:43.662 [2024-04-26 14:52:43.741451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.662 [2024-04-26 14:52:43.741476] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.662 [2024-04-26 14:52:43.741507] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.662 [2024-04-26 14:52:43.741635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.662 [2024-04-26 14:52:43.741710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.662 [2024-04-26 14:52:43.741794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.662 [2024-04-26 14:52:43.741801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.232 14:52:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.232 14:52:44 -- common/autotest_common.sh@850 -- # return 0 00:15:44.232 14:52:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:44.232 14:52:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:44.232 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 14:52:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.232 14:52:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:44.232 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.232 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.232 [2024-04-26 14:52:44.301154] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7fec737b0940) succeed. 00:15:44.232 [2024-04-26 14:52:44.312257] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7fec73769940) succeed. 
00:15:44.804 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.804 14:52:44 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:44.804 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.804 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.804 Malloc0 00:15:44.804 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.804 14:52:44 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.804 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.804 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.804 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.804 14:52:44 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.804 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.804 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.804 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:44.805 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.805 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.805 [2024-04-26 14:52:44.718601] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:44.805 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:44.805 test case1: single bdev can't be used in multiple subsystems 00:15:44.805 14:52:44 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:44.805 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.805 14:52:44 -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.805 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:44.805 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.805 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.805 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@28 -- # nmic_status=0 00:15:44.805 14:52:44 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:44.805 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.805 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.805 [2024-04-26 14:52:44.742364] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:44.805 [2024-04-26 14:52:44.742433] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:44.805 [2024-04-26 14:52:44.742456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.805 request: 00:15:44.805 { 00:15:44.805 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:44.805 "namespace": { 00:15:44.805 "bdev_name": "Malloc0", 00:15:44.805 "no_auto_visible": false 00:15:44.805 }, 00:15:44.805 "method": "nvmf_subsystem_add_ns", 00:15:44.805 "req_id": 1 00:15:44.805 } 00:15:44.805 Got JSON-RPC error response 00:15:44.805 response: 00:15:44.805 { 00:15:44.805 "code": -32602, 00:15:44.805 "message": "Invalid parameters" 00:15:44.805 } 00:15:44.805 14:52:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@29 -- # nmic_status=1 00:15:44.805 14:52:44 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:44.805 14:52:44 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:15:44.805 Adding namespace failed - expected result. 00:15:44.805 14:52:44 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:44.805 test case2: host connect to nvmf target in multiple paths 00:15:44.805 14:52:44 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:15:44.805 14:52:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.805 14:52:44 -- common/autotest_common.sh@10 -- # set +x 00:15:44.805 [2024-04-26 14:52:44.750584] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:44.805 14:52:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.805 14:52:44 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:48.998 14:52:48 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:15:52.285 14:52:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.285 14:52:51 -- common/autotest_common.sh@1184 -- # local i=0 00:15:52.285 14:52:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.285 14:52:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:52.285 14:52:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:54.190 14:52:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:54.190 14:52:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:54.190 14:52:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.190 14:52:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:54.190 
14:52:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.190 14:52:53 -- common/autotest_common.sh@1194 -- # return 0 00:15:54.190 14:52:53 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:54.190 [global] 00:15:54.190 thread=1 00:15:54.190 invalidate=1 00:15:54.190 rw=write 00:15:54.190 time_based=1 00:15:54.190 runtime=1 00:15:54.190 ioengine=libaio 00:15:54.190 direct=1 00:15:54.190 bs=4096 00:15:54.190 iodepth=1 00:15:54.190 norandommap=0 00:15:54.190 numjobs=1 00:15:54.190 00:15:54.190 verify_dump=1 00:15:54.190 verify_backlog=512 00:15:54.190 verify_state_save=0 00:15:54.190 do_verify=1 00:15:54.190 verify=crc32c-intel 00:15:54.190 [job0] 00:15:54.190 filename=/dev/nvme0n1 00:15:54.190 Could not set queue depth (nvme0n1) 00:15:54.190 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.190 fio-3.35 00:15:54.190 Starting 1 thread 00:15:55.570 00:15:55.570 job0: (groupid=0, jobs=1): err= 0: pid=218050: Fri Apr 26 14:52:55 2024 00:15:55.570 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:15:55.570 slat (nsec): min=4705, max=42520, avg=9622.60, stdev=4326.01 00:15:55.570 clat (usec): min=77, max=145, avg=94.66, stdev=11.67 00:15:55.570 lat (usec): min=83, max=169, avg=104.28, stdev=13.65 00:15:55.570 clat percentiles (usec): 00:15:55.570 | 1.00th=[ 80], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:15:55.570 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:15:55.570 | 70.00th=[ 97], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 119], 00:15:55.570 | 99.00th=[ 129], 99.50th=[ 135], 99.90th=[ 141], 99.95th=[ 143], 00:15:55.570 | 99.99th=[ 147] 00:15:55.570 write: IOPS=5010, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1001msec); 0 zone resets 00:15:55.570 slat (nsec): min=5385, max=49137, avg=10563.47, stdev=4850.40 00:15:55.570 clat (usec): min=71, max=190, 
avg=87.84, stdev=11.22 00:15:55.570 lat (usec): min=77, max=231, avg=98.41, stdev=13.60 00:15:55.570 clat percentiles (usec): 00:15:55.570 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:15:55.570 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 88], 00:15:55.570 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 112], 00:15:55.570 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 133], 99.95th=[ 139], 00:15:55.570 | 99.99th=[ 192] 00:15:55.570 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:15:55.570 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:55.570 lat (usec) : 100=80.49%, 250=19.51% 00:15:55.570 cpu : usr=5.70%, sys=9.20%, ctx=9624, majf=0, minf=2 00:15:55.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.570 issued rwts: total=4608,5016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.570 00:15:55.570 Run status group 0 (all jobs): 00:15:55.570 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:15:55.570 WRITE: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=19.6MiB (20.5MB), run=1001-1001msec 00:15:55.570 00:15:55.570 Disk stats (read/write): 00:15:55.570 nvme0n1: ios=4174/4608, merge=0/0, ticks=394/412, in_queue=806, util=90.98% 00:15:55.570 14:52:55 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:00.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:00.860 14:53:00 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:00.860 14:53:00 -- common/autotest_common.sh@1205 -- # local i=0 00:16:00.860 14:53:00 -- common/autotest_common.sh@1206 -- # 
lsblk -o NAME,SERIAL 00:16:00.860 14:53:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.860 14:53:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:00.860 14:53:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:00.860 14:53:00 -- common/autotest_common.sh@1217 -- # return 0 00:16:00.860 14:53:00 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:00.860 14:53:00 -- target/nmic.sh@53 -- # nvmftestfini 00:16:00.860 14:53:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:00.860 14:53:00 -- nvmf/common.sh@117 -- # sync 00:16:00.860 14:53:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:00.860 14:53:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:00.860 14:53:00 -- nvmf/common.sh@120 -- # set +e 00:16:00.860 14:53:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.860 14:53:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:00.860 rmmod nvme_rdma 00:16:00.860 rmmod nvme_fabrics 00:16:00.860 14:53:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.860 14:53:00 -- nvmf/common.sh@124 -- # set -e 00:16:00.860 14:53:00 -- nvmf/common.sh@125 -- # return 0 00:16:00.860 14:53:00 -- nvmf/common.sh@478 -- # '[' -n 216639 ']' 00:16:00.860 14:53:00 -- nvmf/common.sh@479 -- # killprocess 216639 00:16:00.860 14:53:00 -- common/autotest_common.sh@936 -- # '[' -z 216639 ']' 00:16:00.860 14:53:00 -- common/autotest_common.sh@940 -- # kill -0 216639 00:16:00.860 14:53:00 -- common/autotest_common.sh@941 -- # uname 00:16:00.860 14:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.860 14:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 216639 00:16:00.860 14:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:00.860 14:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:00.860 14:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 216639' 
00:16:00.860 killing process with pid 216639 00:16:00.860 14:53:00 -- common/autotest_common.sh@955 -- # kill 216639 00:16:00.860 14:53:00 -- common/autotest_common.sh@960 -- # wait 216639 00:16:00.860 [2024-04-26 14:53:00.762684] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:02.238 14:53:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:02.238 14:53:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:02.238 00:16:02.238 real 0m21.054s 00:16:02.238 user 1m12.338s 00:16:02.238 sys 0m2.746s 00:16:02.238 14:53:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:02.238 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:16:02.238 ************************************ 00:16:02.238 END TEST nvmf_nmic 00:16:02.238 ************************************ 00:16:02.238 14:53:02 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:02.238 14:53:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:02.238 14:53:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.238 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:16:02.238 ************************************ 00:16:02.238 START TEST nvmf_fio_target 00:16:02.238 ************************************ 00:16:02.238 14:53:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:02.497 * Looking for test storage... 
00:16:02.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:02.497 14:53:02 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.497 14:53:02 -- nvmf/common.sh@7 -- # uname -s 00:16:02.497 14:53:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.497 14:53:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.497 14:53:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.497 14:53:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.497 14:53:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.497 14:53:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.497 14:53:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.497 14:53:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.497 14:53:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.497 14:53:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.497 14:53:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:02.497 14:53:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:16:02.497 14:53:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.497 14:53:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.497 14:53:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.497 14:53:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.497 14:53:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:02.497 14:53:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.497 14:53:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.497 14:53:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.497 14:53:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.497 14:53:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.497 14:53:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.497 14:53:02 -- paths/export.sh@5 -- # export PATH 00:16:02.498 14:53:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.498 14:53:02 -- nvmf/common.sh@47 -- # : 0 00:16:02.498 14:53:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.498 14:53:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.498 14:53:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.498 14:53:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.498 14:53:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.498 14:53:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.498 14:53:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.498 14:53:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.498 14:53:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.498 14:53:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.498 14:53:02 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:02.498 14:53:02 -- target/fio.sh@16 -- # nvmftestinit 00:16:02.498 14:53:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:02.498 14:53:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.498 14:53:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:02.498 14:53:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:02.498 14:53:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:02.498 14:53:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.498 14:53:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:02.498 14:53:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.498 14:53:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:02.498 14:53:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:02.498 14:53:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.498 14:53:02 -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 14:53:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:04.405 14:53:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.405 14:53:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.405 14:53:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.405 14:53:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.405 14:53:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.405 14:53:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.405 14:53:04 -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.405 14:53:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.405 14:53:04 -- nvmf/common.sh@296 -- # e810=() 00:16:04.405 14:53:04 -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.405 14:53:04 -- nvmf/common.sh@297 -- # x722=() 00:16:04.405 14:53:04 -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.405 14:53:04 -- nvmf/common.sh@298 -- # mlx=() 00:16:04.405 14:53:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.405 14:53:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.405 14:53:04 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.405 14:53:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.405 14:53:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:53:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:04.405 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:04.405 14:53:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:04.405 14:53:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:53:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:04.405 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:04.405 14:53:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:04.405 14:53:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.405 14:53:04 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:53:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.405 14:53:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:04.405 14:53:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.405 14:53:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:04.405 Found net devices under 0000:09:00.0: mlx_0_0 00:16:04.405 14:53:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.405 14:53:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.405 14:53:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:04.405 14:53:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.405 14:53:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:04.405 Found net devices under 0000:09:00.1: mlx_0_1 00:16:04.405 14:53:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.405 14:53:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:04.405 14:53:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:04.405 14:53:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:04.405 14:53:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:04.405 14:53:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:04.405 14:53:04 -- nvmf/common.sh@58 -- # uname 00:16:04.405 14:53:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:04.405 14:53:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:04.405 14:53:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:04.405 14:53:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:04.405 14:53:04 -- 
nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:04.405 14:53:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:04.406 14:53:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:04.406 14:53:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:04.406 14:53:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:04.406 14:53:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:04.406 14:53:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:04.406 14:53:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:04.406 14:53:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:04.406 14:53:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:04.406 14:53:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:04.406 14:53:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:04.406 14:53:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@105 -- # continue 2 00:16:04.406 14:53:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@105 -- # continue 2 00:16:04.406 14:53:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:04.406 14:53:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:04.406 14:53:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:04.406 14:53:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:04.406 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:04.406 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:16:04.406 altname enp9s0f0np0 00:16:04.406 inet 192.168.100.8/24 scope global mlx_0_0 00:16:04.406 valid_lft forever preferred_lft forever 00:16:04.406 14:53:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:04.406 14:53:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:04.406 14:53:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:04.406 14:53:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:04.406 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:04.406 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:16:04.406 altname enp9s0f1np1 00:16:04.406 inet 192.168.100.9/24 scope global mlx_0_1 00:16:04.406 valid_lft forever preferred_lft forever 00:16:04.406 14:53:04 -- nvmf/common.sh@411 -- # return 0 00:16:04.406 14:53:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:04.406 14:53:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:04.406 14:53:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:04.406 14:53:04 -- nvmf/common.sh@86 -- 
# get_rdma_if_list 00:16:04.406 14:53:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:04.406 14:53:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:04.406 14:53:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:04.406 14:53:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:04.406 14:53:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:04.406 14:53:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@105 -- # continue 2 00:16:04.406 14:53:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:04.406 14:53:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:04.406 14:53:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@105 -- # continue 2 00:16:04.406 14:53:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:04.406 14:53:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:04.406 14:53:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:04.406 14:53:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:04.406 14:53:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:04.406 14:53:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:04.406 192.168.100.9' 00:16:04.406 14:53:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:04.406 192.168.100.9' 00:16:04.406 14:53:04 -- nvmf/common.sh@446 -- # head -n 1 00:16:04.406 14:53:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:04.406 14:53:04 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:04.406 192.168.100.9' 00:16:04.406 14:53:04 -- nvmf/common.sh@447 -- # tail -n +2 00:16:04.406 14:53:04 -- nvmf/common.sh@447 -- # head -n 1 00:16:04.406 14:53:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:04.406 14:53:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:04.406 14:53:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:04.406 14:53:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:04.406 14:53:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:04.406 14:53:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:04.406 14:53:04 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:04.406 14:53:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:04.406 14:53:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:04.406 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 14:53:04 -- nvmf/common.sh@470 -- # nvmfpid=220667 00:16:04.406 14:53:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:04.406 14:53:04 -- nvmf/common.sh@471 -- # waitforlisten 220667 00:16:04.406 14:53:04 -- common/autotest_common.sh@817 -- # '[' -z 220667 ']' 00:16:04.406 14:53:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.406 14:53:04 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.406 14:53:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.406 14:53:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.406 14:53:04 -- common/autotest_common.sh@10 -- # set +x 00:16:04.406 [2024-04-26 14:53:04.447687] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:04.406 [2024-04-26 14:53:04.447816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.667 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.667 [2024-04-26 14:53:04.581571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.928 [2024-04-26 14:53:04.839002] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.928 [2024-04-26 14:53:04.839066] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.928 [2024-04-26 14:53:04.839094] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.928 [2024-04-26 14:53:04.839119] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.928 [2024-04-26 14:53:04.839148] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.928 [2024-04-26 14:53:04.839277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.928 [2024-04-26 14:53:04.839315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.928 [2024-04-26 14:53:04.839375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.928 [2024-04-26 14:53:04.839380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.495 14:53:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.495 14:53:05 -- common/autotest_common.sh@850 -- # return 0 00:16:05.495 14:53:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:05.495 14:53:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:05.495 14:53:05 -- common/autotest_common.sh@10 -- # set +x 00:16:05.495 14:53:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.495 14:53:05 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:05.754 [2024-04-26 14:53:05.643519] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f7b2e36c940) succeed. 00:16:05.754 [2024-04-26 14:53:05.654631] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f7b2e328940) succeed. 
00:16:06.013 14:53:05 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.271 14:53:06 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:06.271 14:53:06 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.837 14:53:06 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:06.837 14:53:06 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.119 14:53:06 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:07.119 14:53:06 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.376 14:53:07 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:07.376 14:53:07 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:07.634 14:53:07 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.892 14:53:07 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:07.892 14:53:07 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.150 14:53:08 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:08.150 14:53:08 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.407 14:53:08 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:08.408 14:53:08 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:08.666 14:53:08 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:16:08.924 14:53:08 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:08.924 14:53:08 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.182 14:53:09 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:09.182 14:53:09 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:09.440 14:53:09 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:09.698 [2024-04-26 14:53:09.631492] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:09.698 14:53:09 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:09.964 14:53:09 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:10.229 14:53:10 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:14.421 14:53:13 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:14.421 14:53:13 -- common/autotest_common.sh@1184 -- # local i=0 00:16:14.421 14:53:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.421 14:53:13 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:14.421 14:53:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:14.421 14:53:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:15.800 14:53:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:15.800 14:53:15 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:15.800 14:53:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.800 14:53:15 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:15.800 14:53:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.800 14:53:15 -- common/autotest_common.sh@1194 -- # return 0 00:16:15.800 14:53:15 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:15.800 [global] 00:16:15.800 thread=1 00:16:15.800 invalidate=1 00:16:15.800 rw=write 00:16:15.800 time_based=1 00:16:15.800 runtime=1 00:16:15.800 ioengine=libaio 00:16:15.800 direct=1 00:16:15.800 bs=4096 00:16:15.800 iodepth=1 00:16:15.800 norandommap=0 00:16:15.800 numjobs=1 00:16:15.800 00:16:15.800 verify_dump=1 00:16:15.800 verify_backlog=512 00:16:15.800 verify_state_save=0 00:16:15.800 do_verify=1 00:16:15.800 verify=crc32c-intel 00:16:15.800 [job0] 00:16:15.800 filename=/dev/nvme0n1 00:16:15.800 [job1] 00:16:15.800 filename=/dev/nvme0n2 00:16:15.800 [job2] 00:16:15.800 filename=/dev/nvme0n3 00:16:15.800 [job3] 00:16:15.800 filename=/dev/nvme0n4 00:16:15.800 Could not set queue depth (nvme0n1) 00:16:15.800 Could not set queue depth (nvme0n2) 00:16:15.800 Could not set queue depth (nvme0n3) 00:16:15.800 Could not set queue depth (nvme0n4) 00:16:15.800 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.800 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.800 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.800 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.800 fio-3.35 00:16:15.800 Starting 4 threads 00:16:17.180 00:16:17.180 job0: (groupid=0, jobs=1): err= 0: 
pid=222159: Fri Apr 26 14:53:17 2024 00:16:17.180 read: IOPS=2407, BW=9630KiB/s (9861kB/s)(9640KiB/1001msec) 00:16:17.180 slat (nsec): min=5580, max=42128, avg=12367.53, stdev=5406.18 00:16:17.180 clat (usec): min=111, max=442, avg=189.59, stdev=36.42 00:16:17.180 lat (usec): min=123, max=461, avg=201.95, stdev=38.08 00:16:17.180 clat percentiles (usec): 00:16:17.180 | 1.00th=[ 120], 5.00th=[ 139], 10.00th=[ 163], 20.00th=[ 172], 00:16:17.180 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:16:17.180 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 237], 95.00th=[ 260], 00:16:17.180 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 412], 99.95th=[ 416], 00:16:17.180 | 99.99th=[ 441] 00:16:17.180 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:17.180 slat (nsec): min=6022, max=61119, avg=13509.26, stdev=5223.43 00:16:17.180 clat (usec): min=103, max=624, avg=180.40, stdev=38.50 00:16:17.180 lat (usec): min=115, max=643, avg=193.91, stdev=40.31 00:16:17.180 clat percentiles (usec): 00:16:17.180 | 1.00th=[ 111], 5.00th=[ 127], 10.00th=[ 151], 20.00th=[ 159], 00:16:17.180 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:16:17.180 | 70.00th=[ 184], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 241], 00:16:17.180 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 445], 99.95th=[ 457], 00:16:17.180 | 99.99th=[ 627] 00:16:17.180 bw ( KiB/s): min=12288, max=12288, per=24.85%, avg=12288.00, stdev= 0.00, samples=1 00:16:17.180 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:17.180 lat (usec) : 250=95.09%, 500=4.89%, 750=0.02% 00:16:17.180 cpu : usr=3.40%, sys=6.50%, ctx=4970, majf=0, minf=1 00:16:17.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.180 issued rwts: total=2410,2560,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:17.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.180 job1: (groupid=0, jobs=1): err= 0: pid=222160: Fri Apr 26 14:53:17 2024 00:16:17.180 read: IOPS=3464, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec) 00:16:17.180 slat (nsec): min=5863, max=50808, avg=11353.16, stdev=4049.89 00:16:17.180 clat (usec): min=88, max=454, avg=122.62, stdev=40.53 00:16:17.180 lat (usec): min=95, max=471, avg=133.97, stdev=41.48 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:16:17.181 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:16:17.181 | 70.00th=[ 114], 80.00th=[ 125], 90.00th=[ 172], 95.00th=[ 221], 00:16:17.181 | 99.00th=[ 289], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 449], 00:16:17.181 | 99.99th=[ 453] 00:16:17.181 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:17.181 slat (nsec): min=5878, max=48598, avg=12782.68, stdev=5238.80 00:16:17.181 clat (usec): min=81, max=602, avg=130.45, stdev=53.75 00:16:17.181 lat (usec): min=90, max=618, avg=143.23, stdev=56.23 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:16:17.181 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 109], 00:16:17.181 | 70.00th=[ 149], 80.00th=[ 169], 90.00th=[ 219], 95.00th=[ 233], 00:16:17.181 | 99.00th=[ 310], 99.50th=[ 343], 99.90th=[ 416], 99.95th=[ 457], 00:16:17.181 | 99.99th=[ 603] 00:16:17.181 bw ( KiB/s): min=16448, max=16448, per=33.26%, avg=16448.00, stdev= 0.00, samples=1 00:16:17.181 iops : min= 4112, max= 4112, avg=4112.00, stdev= 0.00, samples=1 00:16:17.181 lat (usec) : 100=28.66%, 250=69.16%, 500=2.17%, 750=0.01% 00:16:17.181 cpu : usr=3.40%, sys=9.60%, ctx=7052, majf=0, minf=1 00:16:17.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 issued rwts: total=3468,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.181 job2: (groupid=0, jobs=1): err= 0: pid=222161: Fri Apr 26 14:53:17 2024 00:16:17.181 read: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec) 00:16:17.181 slat (nsec): min=5847, max=36877, avg=11242.29, stdev=3439.81 00:16:17.181 clat (usec): min=102, max=238, avg=132.83, stdev=16.80 00:16:17.181 lat (usec): min=109, max=253, avg=144.08, stdev=18.02 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 122], 00:16:17.181 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:16:17.181 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 169], 00:16:17.181 | 99.00th=[ 190], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 239], 00:16:17.181 | 99.99th=[ 239] 00:16:17.181 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:17.181 slat (nsec): min=6173, max=46206, avg=12594.06, stdev=4245.75 00:16:17.181 clat (usec): min=92, max=255, avg=120.18, stdev=10.56 00:16:17.181 lat (usec): min=100, max=291, avg=132.78, stdev=12.51 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:16:17.181 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:16:17.181 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 139], 00:16:17.181 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 180], 00:16:17.181 | 99.99th=[ 255] 00:16:17.181 bw ( KiB/s): min=15632, max=15632, per=31.61%, avg=15632.00, stdev= 0.00, samples=1 00:16:17.181 iops : min= 3908, max= 3908, avg=3908.00, stdev= 0.00, samples=1 00:16:17.181 lat (usec) : 100=0.49%, 250=99.49%, 500=0.01% 00:16:17.181 cpu : usr=4.60%, sys=8.30%, ctx=7074, majf=0, minf=1 00:16:17.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 issued rwts: total=3489,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.181 job3: (groupid=0, jobs=1): err= 0: pid=222162: Fri Apr 26 14:53:17 2024 00:16:17.181 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:16:17.181 slat (nsec): min=5748, max=54852, avg=12134.81, stdev=5258.22 00:16:17.181 clat (usec): min=101, max=598, avg=179.40, stdev=43.25 00:16:17.181 lat (usec): min=107, max=612, avg=191.54, stdev=45.63 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 145], 00:16:17.181 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:16:17.181 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 227], 95.00th=[ 247], 00:16:17.181 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 379], 00:16:17.181 | 99.99th=[ 603] 00:16:17.181 write: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1002msec); 0 zone resets 00:16:17.181 slat (nsec): min=6326, max=49860, avg=13935.68, stdev=5721.43 00:16:17.181 clat (usec): min=96, max=398, avg=171.08, stdev=38.52 00:16:17.181 lat (usec): min=103, max=438, avg=185.01, stdev=41.57 00:16:17.181 clat percentiles (usec): 00:16:17.181 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 159], 00:16:17.181 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:16:17.181 | 70.00th=[ 178], 80.00th=[ 196], 90.00th=[ 219], 95.00th=[ 229], 00:16:17.181 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 383], 00:16:17.181 | 99.99th=[ 400] 00:16:17.181 bw ( KiB/s): min= 8992, max=12288, per=21.52%, avg=10640.00, stdev=2330.62, samples=2 00:16:17.181 iops : min= 2248, max= 3072, avg=2660.00, stdev=582.66, samples=2 00:16:17.181 lat (usec) : 100=0.54%, 
250=96.03%, 500=3.41%, 750=0.02% 00:16:17.181 cpu : usr=3.40%, sys=7.09%, ctx=5221, majf=0, minf=2 00:16:17.181 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.181 issued rwts: total=2560,2660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.181 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.181 00:16:17.181 Run status group 0 (all jobs): 00:16:17.181 READ: bw=46.5MiB/s (48.8MB/s), 9630KiB/s-13.6MiB/s (9861kB/s-14.3MB/s), io=46.6MiB (48.9MB), run=1001-1002msec 00:16:17.181 WRITE: bw=48.3MiB/s (50.6MB/s), 9.99MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=48.4MiB (50.7MB), run=1001-1002msec 00:16:17.181 00:16:17.181 Disk stats (read/write): 00:16:17.181 nvme0n1: ios=2098/2170, merge=0/0, ticks=401/380, in_queue=781, util=86.57% 00:16:17.181 nvme0n2: ios=2997/3072, merge=0/0, ticks=351/377, in_queue=728, util=86.67% 00:16:17.181 nvme0n3: ios=2907/3072, merge=0/0, ticks=365/357, in_queue=722, util=89.12% 00:16:17.181 nvme0n4: ios=2048/2301, merge=0/0, ticks=371/382, in_queue=753, util=89.68% 00:16:17.181 14:53:17 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:17.181 [global] 00:16:17.181 thread=1 00:16:17.181 invalidate=1 00:16:17.181 rw=randwrite 00:16:17.181 time_based=1 00:16:17.181 runtime=1 00:16:17.181 ioengine=libaio 00:16:17.181 direct=1 00:16:17.181 bs=4096 00:16:17.181 iodepth=1 00:16:17.181 norandommap=0 00:16:17.181 numjobs=1 00:16:17.181 00:16:17.181 verify_dump=1 00:16:17.181 verify_backlog=512 00:16:17.181 verify_state_save=0 00:16:17.181 do_verify=1 00:16:17.181 verify=crc32c-intel 00:16:17.181 [job0] 00:16:17.181 filename=/dev/nvme0n1 00:16:17.181 [job1] 00:16:17.181 filename=/dev/nvme0n2 00:16:17.181 [job2] 00:16:17.181 filename=/dev/nvme0n3 
00:16:17.181 [job3] 00:16:17.181 filename=/dev/nvme0n4 00:16:17.181 Could not set queue depth (nvme0n1) 00:16:17.181 Could not set queue depth (nvme0n2) 00:16:17.181 Could not set queue depth (nvme0n3) 00:16:17.181 Could not set queue depth (nvme0n4) 00:16:17.440 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.440 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.440 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.440 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.440 fio-3.35 00:16:17.440 Starting 4 threads 00:16:18.818 00:16:18.818 job0: (groupid=0, jobs=1): err= 0: pid=222402: Fri Apr 26 14:53:18 2024 00:16:18.818 read: IOPS=1902, BW=7608KiB/s (7791kB/s)(7616KiB/1001msec) 00:16:18.818 slat (nsec): min=5118, max=56251, avg=10716.71, stdev=6054.27 00:16:18.818 clat (usec): min=116, max=477, avg=252.43, stdev=59.12 00:16:18.818 lat (usec): min=124, max=491, avg=263.15, stdev=60.78 00:16:18.818 clat percentiles (usec): 00:16:18.818 | 1.00th=[ 135], 5.00th=[ 167], 10.00th=[ 196], 20.00th=[ 223], 00:16:18.818 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 247], 00:16:18.818 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 351], 95.00th=[ 388], 00:16:18.818 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 474], 99.95th=[ 478], 00:16:18.818 | 99.99th=[ 478] 00:16:18.818 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:18.818 slat (nsec): min=5696, max=53371, avg=10879.89, stdev=5838.31 00:16:18.818 clat (usec): min=107, max=486, avg=227.41, stdev=25.97 00:16:18.818 lat (usec): min=122, max=501, avg=238.29, stdev=26.79 00:16:18.818 clat percentiles (usec): 00:16:18.818 | 1.00th=[ 135], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 219], 00:16:18.818 | 30.00th=[ 221], 
40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:16:18.818 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:16:18.818 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 412], 99.95th=[ 424], 00:16:18.818 | 99.99th=[ 486] 00:16:18.818 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.818 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.818 lat (usec) : 250=81.58%, 500=18.42% 00:16:18.818 cpu : usr=1.60%, sys=5.00%, ctx=3952, majf=0, minf=1 00:16:18.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.819 job1: (groupid=0, jobs=1): err= 0: pid=222416: Fri Apr 26 14:53:18 2024 00:16:18.819 read: IOPS=3837, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec) 00:16:18.819 slat (nsec): min=4729, max=30480, avg=10932.81, stdev=3595.40 00:16:18.819 clat (usec): min=88, max=221, avg=116.33, stdev= 9.47 00:16:18.819 lat (usec): min=95, max=233, avg=127.26, stdev=11.11 00:16:18.819 clat percentiles (usec): 00:16:18.819 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 111], 00:16:18.819 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 117], 00:16:18.819 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 133], 00:16:18.819 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 180], 99.95th=[ 198], 00:16:18.819 | 99.99th=[ 223] 00:16:18.819 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:16:18.819 slat (nsec): min=4981, max=59290, avg=10838.74, stdev=4099.56 00:16:18.819 clat (usec): min=83, max=191, avg=108.59, stdev= 9.17 00:16:18.819 lat (usec): min=89, max=204, avg=119.43, stdev=11.38 00:16:18.819 clat 
percentiles (usec): 00:16:18.819 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 99], 20.00th=[ 103], 00:16:18.819 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 111], 00:16:18.819 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 125], 00:16:18.819 | 99.00th=[ 135], 99.50th=[ 137], 99.90th=[ 149], 99.95th=[ 167], 00:16:18.819 | 99.99th=[ 192] 00:16:18.819 bw ( KiB/s): min=16496, max=16496, per=40.31%, avg=16496.00, stdev= 0.00, samples=1 00:16:18.819 iops : min= 4124, max= 4124, avg=4124.00, stdev= 0.00, samples=1 00:16:18.819 lat (usec) : 100=8.01%, 250=91.99% 00:16:18.819 cpu : usr=5.60%, sys=11.80%, ctx=7937, majf=0, minf=1 00:16:18.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 issued rwts: total=3841,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.819 job2: (groupid=0, jobs=1): err= 0: pid=222453: Fri Apr 26 14:53:18 2024 00:16:18.819 read: IOPS=1907, BW=7628KiB/s (7811kB/s)(7636KiB/1001msec) 00:16:18.819 slat (nsec): min=5160, max=57260, avg=10986.07, stdev=5916.02 00:16:18.819 clat (usec): min=129, max=483, avg=252.31, stdev=54.26 00:16:18.819 lat (usec): min=137, max=496, avg=263.29, stdev=55.69 00:16:18.819 clat percentiles (usec): 00:16:18.819 | 1.00th=[ 145], 5.00th=[ 178], 10.00th=[ 200], 20.00th=[ 225], 00:16:18.819 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:16:18.819 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 330], 95.00th=[ 379], 00:16:18.819 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 486], 00:16:18.819 | 99.99th=[ 486] 00:16:18.819 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:18.819 slat (nsec): min=5814, max=51586, avg=10757.15, stdev=5619.72 00:16:18.819 clat (usec): 
min=121, max=463, avg=226.77, stdev=24.65 00:16:18.819 lat (usec): min=140, max=514, avg=237.53, stdev=25.42 00:16:18.819 clat percentiles (usec): 00:16:18.819 | 1.00th=[ 143], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 219], 00:16:18.819 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:16:18.819 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 251], 00:16:18.819 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 437], 99.95th=[ 441], 00:16:18.819 | 99.99th=[ 465] 00:16:18.819 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.819 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.819 lat (usec) : 250=81.45%, 500=18.55% 00:16:18.819 cpu : usr=2.60%, sys=4.00%, ctx=3957, majf=0, minf=1 00:16:18.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 issued rwts: total=1909,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.819 job3: (groupid=0, jobs=1): err= 0: pid=222459: Fri Apr 26 14:53:18 2024 00:16:18.819 read: IOPS=1919, BW=7676KiB/s (7861kB/s)(7684KiB/1001msec) 00:16:18.819 slat (nsec): min=5135, max=59970, avg=11099.05, stdev=6230.82 00:16:18.819 clat (usec): min=108, max=448, avg=248.00, stdev=55.66 00:16:18.819 lat (usec): min=116, max=463, avg=259.09, stdev=57.25 00:16:18.819 clat percentiles (usec): 00:16:18.819 | 1.00th=[ 116], 5.00th=[ 151], 10.00th=[ 194], 20.00th=[ 221], 00:16:18.819 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:16:18.819 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 334], 95.00th=[ 379], 00:16:18.819 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 445], 99.95th=[ 449], 00:16:18.819 | 99.99th=[ 449] 00:16:18.819 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:18.819 slat (nsec): min=5881, max=54951, avg=11413.10, stdev=6110.33 00:16:18.819 clat (usec): min=119, max=458, avg=228.23, stdev=27.70 00:16:18.819 lat (usec): min=138, max=486, avg=239.64, stdev=28.69 00:16:18.819 clat percentiles (usec): 00:16:18.819 | 1.00th=[ 133], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 219], 00:16:18.819 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:16:18.819 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 262], 00:16:18.819 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 416], 99.95th=[ 420], 00:16:18.819 | 99.99th=[ 457] 00:16:18.819 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:16:18.819 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:18.819 lat (usec) : 250=80.57%, 500=19.43% 00:16:18.819 cpu : usr=1.60%, sys=5.20%, ctx=3969, majf=0, minf=2 00:16:18.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.819 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.819 00:16:18.819 Run status group 0 (all jobs): 00:16:18.819 READ: bw=37.4MiB/s (39.2MB/s), 7608KiB/s-15.0MiB/s (7791kB/s-15.7MB/s), io=37.4MiB (39.2MB), run=1001-1001msec 00:16:18.819 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-16.0MiB/s (8380kB/s-16.8MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:16:18.819 00:16:18.819 Disk stats (read/write): 00:16:18.819 nvme0n1: ios=1586/1842, merge=0/0, ticks=401/400, in_queue=801, util=86.57% 00:16:18.819 nvme0n2: ios=3243/3584, merge=0/0, ticks=354/357, in_queue=711, util=86.51% 00:16:18.819 nvme0n3: ios=1536/1862, merge=0/0, ticks=376/414, in_queue=790, util=88.95% 00:16:18.819 nvme0n4: ios=1536/1840, 
merge=0/0, ticks=387/411, in_queue=798, util=89.60% 00:16:18.819 14:53:18 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:18.819 [global] 00:16:18.819 thread=1 00:16:18.819 invalidate=1 00:16:18.819 rw=write 00:16:18.819 time_based=1 00:16:18.819 runtime=1 00:16:18.819 ioengine=libaio 00:16:18.819 direct=1 00:16:18.819 bs=4096 00:16:18.819 iodepth=128 00:16:18.820 norandommap=0 00:16:18.820 numjobs=1 00:16:18.820 00:16:18.820 verify_dump=1 00:16:18.820 verify_backlog=512 00:16:18.820 verify_state_save=0 00:16:18.820 do_verify=1 00:16:18.820 verify=crc32c-intel 00:16:18.820 [job0] 00:16:18.820 filename=/dev/nvme0n1 00:16:18.820 [job1] 00:16:18.820 filename=/dev/nvme0n2 00:16:18.820 [job2] 00:16:18.820 filename=/dev/nvme0n3 00:16:18.820 [job3] 00:16:18.820 filename=/dev/nvme0n4 00:16:18.820 Could not set queue depth (nvme0n1) 00:16:18.820 Could not set queue depth (nvme0n2) 00:16:18.820 Could not set queue depth (nvme0n3) 00:16:18.820 Could not set queue depth (nvme0n4) 00:16:18.820 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.820 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.820 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.820 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.820 fio-3.35 00:16:18.820 Starting 4 threads 00:16:20.202 00:16:20.202 job0: (groupid=0, jobs=1): err= 0: pid=222729: Fri Apr 26 14:53:19 2024 00:16:20.202 read: IOPS=7172, BW=28.0MiB/s (29.4MB/s)(28.1MiB/1003msec) 00:16:20.202 slat (usec): min=2, max=1604, avg=67.31, stdev=247.67 00:16:20.202 clat (usec): min=1679, max=10469, avg=8915.48, stdev=713.55 00:16:20.202 lat (usec): min=2161, max=10490, avg=8982.79, stdev=675.34 00:16:20.202 
clat percentiles (usec): 00:16:20.202 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8029], 20.00th=[ 8291], 00:16:20.202 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:16:20.202 | 70.00th=[ 9372], 80.00th=[ 9372], 90.00th=[ 9503], 95.00th=[ 9634], 00:16:20.202 | 99.00th=[ 9765], 99.50th=[ 9765], 99.90th=[ 9896], 99.95th=[10028], 00:16:20.202 | 99.99th=[10421] 00:16:20.202 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:16:20.202 slat (usec): min=3, max=2249, avg=61.23, stdev=221.96 00:16:20.202 clat (usec): min=3008, max=9843, avg=8174.77, stdev=721.23 00:16:20.202 lat (usec): min=3167, max=9849, avg=8236.01, stdev=691.54 00:16:20.202 clat percentiles (usec): 00:16:20.202 | 1.00th=[ 5604], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7701], 00:16:20.202 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:16:20.202 | 70.00th=[ 8586], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8979], 00:16:20.202 | 99.00th=[ 9372], 99.50th=[ 9372], 99.90th=[ 9896], 99.95th=[ 9896], 00:16:20.202 | 99.99th=[ 9896] 00:16:20.202 bw ( KiB/s): min=29080, max=31552, per=36.34%, avg=30316.00, stdev=1747.97, samples=2 00:16:20.202 iops : min= 7270, max= 7888, avg=7579.00, stdev=436.99, samples=2 00:16:20.202 lat (msec) : 2=0.01%, 4=0.29%, 10=99.69%, 20=0.01% 00:16:20.202 cpu : usr=5.69%, sys=8.58%, ctx=926, majf=0, minf=13 00:16:20.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:20.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.202 issued rwts: total=7194,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.202 job1: (groupid=0, jobs=1): err= 0: pid=222730: Fri Apr 26 14:53:19 2024 00:16:20.202 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:16:20.202 slat (usec): min=3, max=5065, 
avg=136.85, stdev=515.44 00:16:20.202 clat (usec): min=7318, max=28996, avg=17845.18, stdev=5486.58 00:16:20.202 lat (usec): min=7536, max=29000, avg=17982.03, stdev=5513.24 00:16:20.202 clat percentiles (usec): 00:16:20.202 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10683], 00:16:20.202 | 30.00th=[15008], 40.00th=[18744], 50.00th=[19268], 60.00th=[20055], 00:16:20.202 | 70.00th=[20317], 80.00th=[21103], 90.00th=[25560], 95.00th=[27657], 00:16:20.202 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:16:20.202 | 99.99th=[28967] 00:16:20.202 write: IOPS=3839, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:16:20.202 slat (usec): min=3, max=4355, avg=126.36, stdev=471.36 00:16:20.202 clat (usec): min=845, max=26701, avg=16195.70, stdev=5491.77 00:16:20.202 lat (usec): min=4401, max=26709, avg=16322.06, stdev=5510.04 00:16:20.202 clat percentiles (usec): 00:16:20.202 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8586], 20.00th=[ 9110], 00:16:20.202 | 30.00th=[11863], 40.00th=[17171], 50.00th=[17957], 60.00th=[18482], 00:16:20.202 | 70.00th=[19006], 80.00th=[19530], 90.00th=[24773], 95.00th=[25297], 00:16:20.202 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:16:20.202 | 99.99th=[26608] 00:16:20.202 bw ( KiB/s): min=12288, max=17472, per=17.83%, avg=14880.00, stdev=3665.64, samples=2 00:16:20.202 iops : min= 3072, max= 4368, avg=3720.00, stdev=916.41, samples=2 00:16:20.202 lat (usec) : 1000=0.01% 00:16:20.202 lat (msec) : 10=22.07%, 20=49.17%, 50=28.74% 00:16:20.202 cpu : usr=3.20%, sys=4.70%, ctx=724, majf=0, minf=13 00:16:20.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:20.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.202 issued rwts: total=3584,3847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.202 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:16:20.202 job2: (groupid=0, jobs=1): err= 0: pid=222737: Fri Apr 26 14:53:19 2024 00:16:20.202 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:16:20.203 slat (usec): min=3, max=6861, avg=98.26, stdev=421.65 00:16:20.203 clat (usec): min=7171, max=28571, avg=12915.33, stdev=4581.07 00:16:20.203 lat (usec): min=7175, max=28576, avg=13013.59, stdev=4602.98 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:16:20.203 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:16:20.203 | 70.00th=[11469], 80.00th=[12256], 90.00th=[19006], 95.00th=[26084], 00:16:20.203 | 99.00th=[27657], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:16:20.203 | 99.99th=[28443] 00:16:20.203 write: IOPS=5287, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1002msec); 0 zone resets 00:16:20.203 slat (usec): min=3, max=4757, avg=87.89, stdev=349.62 00:16:20.203 clat (usec): min=1769, max=25891, avg=11408.07, stdev=4069.01 00:16:20.203 lat (usec): min=1782, max=25937, avg=11495.96, stdev=4090.61 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:16:20.203 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:16:20.203 | 70.00th=[10421], 80.00th=[10814], 90.00th=[16319], 95.00th=[24773], 00:16:20.203 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:16:20.203 | 99.99th=[25822] 00:16:20.203 bw ( KiB/s): min=20480, max=20888, per=24.79%, avg=20684.00, stdev=288.50, samples=2 00:16:20.203 iops : min= 5120, max= 5222, avg=5171.00, stdev=72.12, samples=2 00:16:20.203 lat (msec) : 2=0.03%, 10=25.90%, 20=65.48%, 50=8.59% 00:16:20.203 cpu : usr=4.90%, sys=5.49%, ctx=808, majf=0, minf=11 00:16:20.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:20.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.203 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.203 issued rwts: total=5120,5298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.203 job3: (groupid=0, jobs=1): err= 0: pid=222738: Fri Apr 26 14:53:19 2024 00:16:20.203 read: IOPS=3602, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1002msec) 00:16:20.203 slat (usec): min=3, max=4254, avg=133.20, stdev=457.26 00:16:20.203 clat (usec): min=589, max=30490, avg=16961.30, stdev=5158.15 00:16:20.203 lat (usec): min=1796, max=30499, avg=17094.50, stdev=5185.46 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 7177], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:16:20.203 | 30.00th=[12256], 40.00th=[16712], 50.00th=[19268], 60.00th=[19792], 00:16:20.203 | 70.00th=[20317], 80.00th=[20317], 90.00th=[21365], 95.00th=[25297], 00:16:20.203 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28967], 99.95th=[29492], 00:16:20.203 | 99.99th=[30540] 00:16:20.203 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:16:20.203 slat (usec): min=3, max=6206, avg=120.45, stdev=425.89 00:16:20.203 clat (usec): min=3040, max=27058, avg=15805.82, stdev=5345.72 00:16:20.203 lat (usec): min=3054, max=27072, avg=15926.27, stdev=5372.47 00:16:20.203 clat percentiles (usec): 00:16:20.203 | 1.00th=[ 6783], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:16:20.203 | 30.00th=[10814], 40.00th=[11731], 50.00th=[17957], 60.00th=[18482], 00:16:20.203 | 70.00th=[18744], 80.00th=[19268], 90.00th=[24773], 95.00th=[25560], 00:16:20.203 | 99.00th=[26608], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:16:20.203 | 99.99th=[27132] 00:16:20.203 bw ( KiB/s): min=12288, max=19664, per=19.15%, avg=15976.00, stdev=5215.62, samples=2 00:16:20.203 iops : min= 3072, max= 4916, avg=3994.00, stdev=1303.90, samples=2 00:16:20.203 lat (usec) : 750=0.01% 00:16:20.203 lat (msec) : 2=0.18%, 4=0.23%, 10=9.24%, 20=64.51%, 50=25.82% 
00:16:20.203 cpu : usr=3.60%, sys=5.09%, ctx=659, majf=0, minf=15 00:16:20.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:20.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.203 issued rwts: total=3610,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.203 00:16:20.203 Run status group 0 (all jobs): 00:16:20.203 READ: bw=76.0MiB/s (79.7MB/s), 14.0MiB/s-28.0MiB/s (14.7MB/s-29.4MB/s), io=76.2MiB (79.9MB), run=1002-1003msec 00:16:20.203 WRITE: bw=81.5MiB/s (85.4MB/s), 15.0MiB/s-29.9MiB/s (15.7MB/s-31.4MB/s), io=81.7MiB (85.7MB), run=1002-1003msec 00:16:20.203 00:16:20.203 Disk stats (read/write): 00:16:20.203 nvme0n1: ios=6194/6521, merge=0/0, ticks=17427/16742, in_queue=34169, util=86.47% 00:16:20.203 nvme0n2: ios=3018/3072, merge=0/0, ticks=13699/13255, in_queue=26954, util=86.79% 00:16:20.203 nvme0n3: ios=4506/4608, merge=0/0, ticks=13669/12855, in_queue=26524, util=89.05% 00:16:20.203 nvme0n4: ios=2808/3072, merge=0/0, ticks=13456/13641, in_queue=27097, util=89.71% 00:16:20.203 14:53:19 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:20.203 [global] 00:16:20.203 thread=1 00:16:20.203 invalidate=1 00:16:20.203 rw=randwrite 00:16:20.203 time_based=1 00:16:20.203 runtime=1 00:16:20.203 ioengine=libaio 00:16:20.203 direct=1 00:16:20.203 bs=4096 00:16:20.203 iodepth=128 00:16:20.203 norandommap=0 00:16:20.203 numjobs=1 00:16:20.203 00:16:20.203 verify_dump=1 00:16:20.203 verify_backlog=512 00:16:20.203 verify_state_save=0 00:16:20.203 do_verify=1 00:16:20.203 verify=crc32c-intel 00:16:20.203 [job0] 00:16:20.203 filename=/dev/nvme0n1 00:16:20.203 [job1] 00:16:20.203 filename=/dev/nvme0n2 00:16:20.203 [job2] 00:16:20.203 filename=/dev/nvme0n3 00:16:20.203 
[job3] 00:16:20.203 filename=/dev/nvme0n4 00:16:20.203 Could not set queue depth (nvme0n1) 00:16:20.203 Could not set queue depth (nvme0n2) 00:16:20.203 Could not set queue depth (nvme0n3) 00:16:20.203 Could not set queue depth (nvme0n4) 00:16:20.203 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.203 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.203 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.203 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.203 fio-3.35 00:16:20.203 Starting 4 threads 00:16:21.583 00:16:21.583 job0: (groupid=0, jobs=1): err= 0: pid=222964: Fri Apr 26 14:53:21 2024 00:16:21.583 read: IOPS=5374, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1008msec) 00:16:21.583 slat (usec): min=3, max=12503, avg=87.27, stdev=426.50 00:16:21.583 clat (usec): min=356, max=28317, avg=11776.67, stdev=5560.44 00:16:21.583 lat (usec): min=464, max=33001, avg=11863.94, stdev=5595.55 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 3982], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7635], 00:16:21.583 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[11076], 00:16:21.583 | 70.00th=[14091], 80.00th=[17695], 90.00th=[20317], 95.00th=[20841], 00:16:21.583 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:16:21.583 | 99.99th=[28443] 00:16:21.583 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:16:21.583 slat (usec): min=4, max=9982, avg=84.04, stdev=433.14 00:16:21.583 clat (usec): min=763, max=35086, avg=11336.52, stdev=5741.17 00:16:21.583 lat (usec): min=1697, max=35102, avg=11420.56, stdev=5785.19 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 4113], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7701], 
00:16:21.583 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8848], 00:16:21.583 | 70.00th=[11863], 80.00th=[16450], 90.00th=[21103], 95.00th=[24773], 00:16:21.583 | 99.00th=[26870], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:16:21.583 | 99.99th=[34866] 00:16:21.583 bw ( KiB/s): min=21960, max=23096, per=29.82%, avg=22528.00, stdev=803.27, samples=2 00:16:21.583 iops : min= 5490, max= 5774, avg=5632.00, stdev=200.82, samples=2 00:16:21.583 lat (usec) : 500=0.02%, 1000=0.06% 00:16:21.583 lat (msec) : 2=0.14%, 4=0.76%, 10=60.31%, 20=26.76%, 50=11.94% 00:16:21.583 cpu : usr=4.37%, sys=8.24%, ctx=760, majf=0, minf=1 00:16:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.583 issued rwts: total=5417,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.583 job1: (groupid=0, jobs=1): err= 0: pid=222965: Fri Apr 26 14:53:21 2024 00:16:21.583 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:16:21.583 slat (usec): min=3, max=10516, avg=193.57, stdev=848.58 00:16:21.583 clat (usec): min=7416, max=51716, avg=24656.08, stdev=13506.12 00:16:21.583 lat (usec): min=7422, max=52918, avg=24849.65, stdev=13625.23 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9241], 00:16:21.583 | 30.00th=[13566], 40.00th=[17695], 50.00th=[20317], 60.00th=[26870], 00:16:21.583 | 70.00th=[38536], 80.00th=[40109], 90.00th=[41681], 95.00th=[42730], 00:16:21.583 | 99.00th=[47449], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:16:21.583 | 99.99th=[51643] 00:16:21.583 write: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1011msec); 0 zone resets 00:16:21.583 slat (usec): min=3, max=12711, avg=179.16, stdev=881.51 00:16:21.583 
clat (usec): min=7046, max=52366, avg=23504.00, stdev=12847.31 00:16:21.583 lat (usec): min=7348, max=52384, avg=23683.16, stdev=12956.36 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 8291], 20.00th=[ 8586], 00:16:21.583 | 30.00th=[12911], 40.00th=[17695], 50.00th=[19268], 60.00th=[28443], 00:16:21.583 | 70.00th=[35914], 80.00th=[37487], 90.00th=[39584], 95.00th=[42206], 00:16:21.583 | 99.00th=[45876], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:16:21.583 | 99.99th=[52167] 00:16:21.583 bw ( KiB/s): min= 4624, max=16024, per=13.67%, avg=10324.00, stdev=8061.02, samples=2 00:16:21.583 iops : min= 1156, max= 4006, avg=2581.00, stdev=2015.25, samples=2 00:16:21.583 lat (msec) : 10=24.90%, 20=26.34%, 50=48.34%, 100=0.42% 00:16:21.583 cpu : usr=2.57%, sys=3.86%, ctx=536, majf=0, minf=1 00:16:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.583 issued rwts: total=2560,2709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.583 job2: (groupid=0, jobs=1): err= 0: pid=222968: Fri Apr 26 14:53:21 2024 00:16:21.583 read: IOPS=4568, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:16:21.583 slat (usec): min=3, max=6523, avg=102.12, stdev=395.34 00:16:21.583 clat (usec): min=3243, max=20583, avg=13513.77, stdev=2662.88 00:16:21.583 lat (usec): min=3250, max=20595, avg=13615.89, stdev=2681.83 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 7701], 5.00th=[10552], 10.00th=[10945], 20.00th=[11863], 00:16:21.583 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:16:21.583 | 70.00th=[13960], 80.00th=[14746], 90.00th=[18220], 95.00th=[19268], 00:16:21.583 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 
00:16:21.583 | 99.99th=[20579] 00:16:21.583 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:16:21.583 slat (usec): min=3, max=4023, avg=97.63, stdev=360.67 00:16:21.583 clat (usec): min=6756, max=21844, avg=12753.42, stdev=2177.36 00:16:21.583 lat (usec): min=6769, max=23656, avg=12851.05, stdev=2190.71 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11207], 00:16:21.583 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:16:21.583 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15401], 95.00th=[17695], 00:16:21.583 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:16:21.583 | 99.99th=[21890] 00:16:21.583 bw ( KiB/s): min=19552, max=20480, per=26.50%, avg=20016.00, stdev=656.20, samples=2 00:16:21.583 iops : min= 4888, max= 5120, avg=5004.00, stdev=164.05, samples=2 00:16:21.583 lat (msec) : 4=0.07%, 10=5.44%, 20=92.85%, 50=1.63% 00:16:21.583 cpu : usr=4.16%, sys=5.15%, ctx=752, majf=0, minf=1 00:16:21.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:21.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.583 issued rwts: total=4619,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.583 job3: (groupid=0, jobs=1): err= 0: pid=222969: Fri Apr 26 14:53:21 2024 00:16:21.583 read: IOPS=5407, BW=21.1MiB/s (22.1MB/s)(21.3MiB/1007msec) 00:16:21.583 slat (usec): min=3, max=4999, avg=88.20, stdev=347.58 00:16:21.583 clat (usec): min=3064, max=23227, avg=11577.38, stdev=2259.26 00:16:21.583 lat (usec): min=3192, max=24581, avg=11665.57, stdev=2276.23 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 5800], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[10552], 00:16:21.583 | 30.00th=[10683], 40.00th=[10814], 
50.00th=[10945], 60.00th=[11469], 00:16:21.583 | 70.00th=[11994], 80.00th=[13566], 90.00th=[14353], 95.00th=[15533], 00:16:21.583 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[20579], 00:16:21.583 | 99.99th=[23200] 00:16:21.583 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:16:21.583 slat (usec): min=4, max=5751, avg=83.57, stdev=328.26 00:16:21.583 clat (usec): min=671, max=25738, avg=11405.25, stdev=3761.47 00:16:21.583 lat (usec): min=2160, max=25752, avg=11488.82, stdev=3787.85 00:16:21.583 clat percentiles (usec): 00:16:21.583 | 1.00th=[ 4424], 5.00th=[ 6325], 10.00th=[ 7570], 20.00th=[ 8979], 00:16:21.583 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10945], 00:16:21.583 | 70.00th=[12518], 80.00th=[13829], 90.00th=[17695], 95.00th=[20317], 00:16:21.583 | 99.00th=[20841], 99.50th=[22152], 99.90th=[23462], 99.95th=[25822], 00:16:21.583 | 99.99th=[25822] 00:16:21.583 bw ( KiB/s): min=22192, max=22864, per=29.82%, avg=22528.00, stdev=475.18, samples=2 00:16:21.583 iops : min= 5548, max= 5716, avg=5632.00, stdev=118.79, samples=2 00:16:21.583 lat (usec) : 750=0.01% 00:16:21.584 lat (msec) : 4=0.42%, 10=25.77%, 20=70.64%, 50=3.17% 00:16:21.584 cpu : usr=4.47%, sys=7.55%, ctx=720, majf=0, minf=1 00:16:21.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:21.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.584 issued rwts: total=5445,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.584 00:16:21.584 Run status group 0 (all jobs): 00:16:21.584 READ: bw=69.7MiB/s (73.1MB/s), 9.89MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=70.5MiB (73.9MB), run=1007-1011msec 00:16:21.584 WRITE: bw=73.8MiB/s (77.4MB/s), 10.5MiB/s-21.8MiB/s (11.0MB/s-22.9MB/s), io=74.6MiB (78.2MB), run=1007-1011msec 
00:16:21.584 00:16:21.584 Disk stats (read/write): 00:16:21.584 nvme0n1: ios=4878/5120, merge=0/0, ticks=24951/26817, in_queue=51768, util=85.77% 00:16:21.584 nvme0n2: ios=2175/2560, merge=0/0, ticks=15078/18269, in_queue=33347, util=86.38% 00:16:21.584 nvme0n3: ios=3961/4096, merge=0/0, ticks=44541/41982, in_queue=86523, util=88.91% 00:16:21.584 nvme0n4: ios=4608/4808, merge=0/0, ticks=31928/33447, in_queue=65375, util=89.66% 00:16:21.584 14:53:21 -- target/fio.sh@55 -- # sync 00:16:21.584 14:53:21 -- target/fio.sh@59 -- # fio_pid=223105 00:16:21.584 14:53:21 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:21.584 14:53:21 -- target/fio.sh@61 -- # sleep 3 00:16:21.584 [global] 00:16:21.584 thread=1 00:16:21.584 invalidate=1 00:16:21.584 rw=read 00:16:21.584 time_based=1 00:16:21.584 runtime=10 00:16:21.584 ioengine=libaio 00:16:21.584 direct=1 00:16:21.584 bs=4096 00:16:21.584 iodepth=1 00:16:21.584 norandommap=1 00:16:21.584 numjobs=1 00:16:21.584 00:16:21.584 [job0] 00:16:21.584 filename=/dev/nvme0n1 00:16:21.584 [job1] 00:16:21.584 filename=/dev/nvme0n2 00:16:21.584 [job2] 00:16:21.584 filename=/dev/nvme0n3 00:16:21.584 [job3] 00:16:21.584 filename=/dev/nvme0n4 00:16:21.584 Could not set queue depth (nvme0n1) 00:16:21.584 Could not set queue depth (nvme0n2) 00:16:21.584 Could not set queue depth (nvme0n3) 00:16:21.584 Could not set queue depth (nvme0n4) 00:16:21.584 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.584 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.584 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.584 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.584 fio-3.35 00:16:21.584 Starting 4 threads 00:16:24.869 
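fio reports bandwidth in both binary (MiB/s) and decimal (MB/s) units, and both follow directly from IOPS times block size. As a sanity check on the log above, the job0 line "IOPS=5374, BW=21.0MiB/s (22.0MB/s)" with the job file's bs=4096 can be reproduced with a few lines of arithmetic (values copied from the log; fio's own rounding may differ in the last digit):

```python
# Sanity-check fio's reported bandwidth for job0 above:
# IOPS=5374 at bs=4096 bytes should give ~21.0 MiB/s (binary) / ~22.0 MB/s (decimal).
iops = 5374
bs = 4096  # bytes, from the job file (bs=4096)

bytes_per_sec = iops * bs
mib_s = bytes_per_sec / 2**20   # binary mebibytes per second
mb_s = bytes_per_sec / 10**6    # decimal megabytes per second

print(f"{mib_s:.1f} MiB/s ({mb_s:.1f} MB/s)")  # 21.0 MiB/s (22.0 MB/s)
```

The same relation holds for every per-job BW line in this log, which is a quick way to spot truncated or misparsed fio output.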
14:53:24 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:24.869 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=54427648, buflen=4096 00:16:24.869 fio: pid=223199, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.869 14:53:24 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:25.127 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=91058176, buflen=4096 00:16:25.127 fio: pid=223198, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:25.128 14:53:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.128 14:53:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:25.386 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5226496, buflen=4096 00:16:25.386 fio: pid=223196, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:25.386 14:53:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.386 14:53:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:25.644 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=57810944, buflen=4096 00:16:25.644 fio: pid=223197, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:25.644 00:16:25.644 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=223196: Fri Apr 26 14:53:25 2024 00:16:25.644 read: IOPS=5147, BW=20.1MiB/s (21.1MB/s)(69.0MiB/3431msec) 00:16:25.644 slat (usec): min=4, max=15830, avg=12.86, stdev=209.27 00:16:25.644 clat (usec): min=82, max=607, avg=179.52, stdev=60.88 00:16:25.644 lat (usec): min=88, max=15983, avg=192.38, stdev=218.33 
00:16:25.644 clat percentiles (usec): 00:16:25.644 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 101], 20.00th=[ 117], 00:16:25.644 | 30.00th=[ 137], 40.00th=[ 165], 50.00th=[ 188], 60.00th=[ 198], 00:16:25.644 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 249], 95.00th=[ 306], 00:16:25.644 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 408], 00:16:25.644 | 99.99th=[ 465] 00:16:25.644 bw ( KiB/s): min=17376, max=21480, per=22.45%, avg=19520.00, stdev=1501.12, samples=6 00:16:25.644 iops : min= 4344, max= 5370, avg=4880.00, stdev=375.28, samples=6 00:16:25.644 lat (usec) : 100=9.09%, 250=80.94%, 500=9.97%, 750=0.01% 00:16:25.644 cpu : usr=2.19%, sys=5.22%, ctx=17665, majf=0, minf=1 00:16:25.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 issued rwts: total=17661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.644 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=223197: Fri Apr 26 14:53:25 2024 00:16:25.644 read: IOPS=7921, BW=30.9MiB/s (32.4MB/s)(119MiB/3850msec) 00:16:25.644 slat (usec): min=4, max=16793, avg=10.64, stdev=186.91 00:16:25.644 clat (usec): min=77, max=459, avg=114.08, stdev=34.63 00:16:25.644 lat (usec): min=83, max=17066, avg=124.72, stdev=191.52 00:16:25.644 clat percentiles (usec): 00:16:25.644 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 94], 00:16:25.644 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 106], 00:16:25.644 | 70.00th=[ 114], 80.00th=[ 125], 90.00th=[ 161], 95.00th=[ 184], 00:16:25.644 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 355], 99.95th=[ 375], 00:16:25.644 | 99.99th=[ 424] 00:16:25.644 bw ( KiB/s): min=20984, max=37464, per=36.00%, avg=31297.29, stdev=5989.96, samples=7 
00:16:25.644 iops : min= 5246, max= 9366, avg=7824.29, stdev=1497.52, samples=7 00:16:25.644 lat (usec) : 100=43.46%, 250=55.45%, 500=1.09% 00:16:25.644 cpu : usr=2.23%, sys=7.43%, ctx=30505, majf=0, minf=1 00:16:25.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 issued rwts: total=30499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.644 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=223198: Fri Apr 26 14:53:25 2024 00:16:25.644 read: IOPS=6995, BW=27.3MiB/s (28.7MB/s)(86.8MiB/3178msec) 00:16:25.644 slat (usec): min=4, max=15749, avg= 9.78, stdev=118.04 00:16:25.644 clat (usec): min=102, max=423, avg=131.50, stdev=19.37 00:16:25.644 lat (usec): min=108, max=15880, avg=141.28, stdev=119.74 00:16:25.644 clat percentiles (usec): 00:16:25.644 | 1.00th=[ 112], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 119], 00:16:25.644 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:16:25.644 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 163], 00:16:25.644 | 99.00th=[ 196], 99.50th=[ 223], 99.90th=[ 326], 99.95th=[ 351], 00:16:25.644 | 99.99th=[ 396] 00:16:25.644 bw ( KiB/s): min=26904, max=30024, per=32.46%, avg=28218.67, stdev=1399.36, samples=6 00:16:25.644 iops : min= 6726, max= 7506, avg=7054.67, stdev=349.84, samples=6 00:16:25.644 lat (usec) : 250=99.67%, 500=0.32% 00:16:25.644 cpu : usr=2.17%, sys=6.99%, ctx=22235, majf=0, minf=1 00:16:25.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 issued rwts: 
total=22232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.644 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=223199: Fri Apr 26 14:53:25 2024 00:16:25.644 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(51.9MiB/2901msec) 00:16:25.644 slat (nsec): min=4840, max=58736, avg=10836.48, stdev=5876.13 00:16:25.644 clat (usec): min=108, max=675, avg=205.45, stdev=42.01 00:16:25.644 lat (usec): min=115, max=688, avg=216.29, stdev=43.30 00:16:25.644 clat percentiles (usec): 00:16:25.644 | 1.00th=[ 121], 5.00th=[ 141], 10.00th=[ 161], 20.00th=[ 182], 00:16:25.644 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:16:25.644 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 249], 95.00th=[ 297], 00:16:25.644 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 416], 00:16:25.644 | 99.99th=[ 465] 00:16:25.644 bw ( KiB/s): min=17888, max=18864, per=21.38%, avg=18588.80, stdev=400.22, samples=5 00:16:25.644 iops : min= 4472, max= 4716, avg=4647.20, stdev=100.06, samples=5 00:16:25.644 lat (usec) : 250=90.07%, 500=9.91%, 750=0.01% 00:16:25.644 cpu : usr=1.66%, sys=5.59%, ctx=13289, majf=0, minf=1 00:16:25.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:25.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.644 issued rwts: total=13289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:25.645 00:16:25.645 Run status group 0 (all jobs): 00:16:25.645 READ: bw=84.9MiB/s (89.0MB/s), 17.9MiB/s-30.9MiB/s (18.8MB/s-32.4MB/s), io=327MiB (343MB), run=2901-3850msec 00:16:25.645 00:16:25.645 Disk stats (read/write): 00:16:25.645 nvme0n1: ios=17097/0, merge=0/0, ticks=3047/0, in_queue=3047, util=94.62% 00:16:25.645 nvme0n2: ios=28074/0, merge=0/0, 
ticks=3271/0, in_queue=3271, util=94.70% 00:16:25.645 nvme0n3: ios=21855/0, merge=0/0, ticks=2815/0, in_queue=2815, util=96.14% 00:16:25.645 nvme0n4: ios=13125/0, merge=0/0, ticks=2691/0, in_queue=2691, util=96.75% 00:16:25.903 14:53:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.903 14:53:25 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:26.161 14:53:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.161 14:53:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:26.729 14:53:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.729 14:53:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:26.986 14:53:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.986 14:53:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:27.555 14:53:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:27.555 14:53:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:27.814 14:53:27 -- target/fio.sh@69 -- # fio_status=0 00:16:27.814 14:53:27 -- target/fio.sh@70 -- # wait 223105 00:16:27.814 14:53:27 -- target/fio.sh@70 -- # fio_status=4 00:16:27.814 14:53:27 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.449 14:53:30 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.449 14:53:30 -- common/autotest_common.sh@1205 -- # local i=0 
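Every job in the last fio run ends with err=121 (func=io_u error, error=Remote I/O error). On Linux, errno 121 is EREMOTEIO, which is what the initiator surfaces once the hotplug test deletes the backing bdevs out from under the in-flight reads, so these failures are the expected outcome rather than a regression. A minimal check of that errno value:

```python
import errno
import os

# fio reported err=121 for each job after the backing bdevs were deleted.
# On Linux, errno 121 is EREMOTEIO ("Remote I/O error"), which the NVMe-oF
# initiator returns when the remote namespace disappears mid-I/O.
assert errno.EREMOTEIO == 121
print(os.strerror(errno.EREMOTEIO))  # "Remote I/O error" on Linux
```

This matches the script's own verdict further down in the log: "nvmf hotplug test: fio failed as expected".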
00:16:30.449 14:53:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:30.449 14:53:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.449 14:53:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:30.449 14:53:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.449 14:53:30 -- common/autotest_common.sh@1217 -- # return 0 00:16:30.449 14:53:30 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:30.449 14:53:30 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:30.449 nvmf hotplug test: fio failed as expected 00:16:30.449 14:53:30 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.449 14:53:30 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:30.449 14:53:30 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:30.449 14:53:30 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:30.449 14:53:30 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:30.449 14:53:30 -- target/fio.sh@91 -- # nvmftestfini 00:16:30.449 14:53:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:30.449 14:53:30 -- nvmf/common.sh@117 -- # sync 00:16:30.449 14:53:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:30.449 14:53:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:30.449 14:53:30 -- nvmf/common.sh@120 -- # set +e 00:16:30.449 14:53:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.449 14:53:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:30.449 rmmod nvme_rdma 00:16:30.449 rmmod nvme_fabrics 00:16:30.449 14:53:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.449 14:53:30 -- nvmf/common.sh@124 -- # set -e 00:16:30.449 14:53:30 -- nvmf/common.sh@125 -- # return 0 00:16:30.449 14:53:30 -- nvmf/common.sh@478 -- # '[' -n 220667 ']' 00:16:30.449 14:53:30 -- nvmf/common.sh@479 -- # 
killprocess 220667 00:16:30.449 14:53:30 -- common/autotest_common.sh@936 -- # '[' -z 220667 ']' 00:16:30.449 14:53:30 -- common/autotest_common.sh@940 -- # kill -0 220667 00:16:30.449 14:53:30 -- common/autotest_common.sh@941 -- # uname 00:16:30.449 14:53:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.450 14:53:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 220667 00:16:30.450 14:53:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:30.450 14:53:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:30.450 14:53:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 220667' 00:16:30.450 killing process with pid 220667 00:16:30.450 14:53:30 -- common/autotest_common.sh@955 -- # kill 220667 00:16:30.450 14:53:30 -- common/autotest_common.sh@960 -- # wait 220667 00:16:31.039 [2024-04-26 14:53:30.991970] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:32.490 14:53:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:32.490 14:53:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:32.490 00:16:32.490 real 0m29.973s 00:16:32.490 user 1m55.237s 00:16:32.490 sys 0m6.247s 00:16:32.490 14:53:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.490 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:32.490 ************************************ 00:16:32.490 END TEST nvmf_fio_target 00:16:32.490 ************************************ 00:16:32.490 14:53:32 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:32.490 14:53:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:32.490 14:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.490 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:32.490 ************************************ 00:16:32.490 START TEST nvmf_bdevio 00:16:32.490 
************************************ 00:16:32.490 14:53:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:32.490 * Looking for test storage... 00:16:32.490 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:32.490 14:53:32 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.490 14:53:32 -- nvmf/common.sh@7 -- # uname -s 00:16:32.490 14:53:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.490 14:53:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.490 14:53:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.491 14:53:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.491 14:53:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.491 14:53:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.491 14:53:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.491 14:53:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.491 14:53:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.491 14:53:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.491 14:53:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:16:32.491 14:53:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:16:32.491 14:53:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.491 14:53:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.491 14:53:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.491 14:53:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.491 14:53:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:32.491 14:53:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.491 
14:53:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.491 14:53:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.491 14:53:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 14:53:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 14:53:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 14:53:32 -- paths/export.sh@5 -- # export PATH 00:16:32.491 14:53:32 -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.491 14:53:32 -- nvmf/common.sh@47 -- # : 0 00:16:32.491 14:53:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.491 14:53:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.491 14:53:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.491 14:53:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.491 14:53:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.491 14:53:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.491 14:53:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.491 14:53:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.491 14:53:32 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.491 14:53:32 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.491 14:53:32 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:32.491 14:53:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:32.491 14:53:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.491 14:53:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:32.491 14:53:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:32.491 14:53:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:32.491 14:53:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.491 14:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.491 14:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.491 14:53:32 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:32.491 14:53:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:32.491 14:53:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.491 14:53:32 -- common/autotest_common.sh@10 -- # set +x 00:16:34.489 14:53:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.489 14:53:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.489 14:53:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.489 14:53:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.489 14:53:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.489 14:53:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.489 14:53:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.489 14:53:34 -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.489 14:53:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.489 14:53:34 -- nvmf/common.sh@296 -- # e810=() 00:16:34.489 14:53:34 -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.489 14:53:34 -- nvmf/common.sh@297 -- # x722=() 00:16:34.489 14:53:34 -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.489 14:53:34 -- nvmf/common.sh@298 -- # mlx=() 00:16:34.489 14:53:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.489 14:53:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.489 14:53:34 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.489 14:53:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:34.489 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:34.489 14:53:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:34.489 14:53:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:34.489 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:34.489 14:53:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:34.489 14:53:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.489 14:53:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.489 14:53:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:34.489 Found net devices under 0000:09:00.0: mlx_0_0 00:16:34.489 14:53:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.489 14:53:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.489 14:53:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:34.489 Found net devices under 0000:09:00.1: mlx_0_1 00:16:34.489 14:53:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.489 14:53:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:34.489 14:53:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:34.489 14:53:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:34.489 14:53:34 -- nvmf/common.sh@58 -- # uname 00:16:34.489 14:53:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:34.489 14:53:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:34.489 14:53:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:34.489 14:53:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:34.489 14:53:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:34.489 14:53:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:34.489 
14:53:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:34.489 14:53:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:34.489 14:53:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:34.489 14:53:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:34.489 14:53:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:34.489 14:53:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.489 14:53:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:34.489 14:53:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:34.489 14:53:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:34.489 14:53:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:34.489 14:53:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.489 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.489 14:53:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:34.489 14:53:34 -- nvmf/common.sh@105 -- # continue 2 00:16:34.490 14:53:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@105 -- # continue 2 00:16:34.490 14:53:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:34.490 14:53:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@113 
-- # awk '{print $4}' 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.490 14:53:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:34.490 14:53:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:34.490 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.490 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:16:34.490 altname enp9s0f0np0 00:16:34.490 inet 192.168.100.8/24 scope global mlx_0_0 00:16:34.490 valid_lft forever preferred_lft forever 00:16:34.490 14:53:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:34.490 14:53:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.490 14:53:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:34.490 14:53:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:34.490 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:34.490 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:16:34.490 altname enp9s0f1np1 00:16:34.490 inet 192.168.100.9/24 scope global mlx_0_1 00:16:34.490 valid_lft forever preferred_lft forever 00:16:34.490 14:53:34 -- nvmf/common.sh@411 -- # return 0 00:16:34.490 14:53:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:34.490 14:53:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:34.490 14:53:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:34.490 14:53:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:34.490 14:53:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:34.490 
14:53:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:34.490 14:53:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:34.490 14:53:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:34.490 14:53:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:34.490 14:53:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@105 -- # continue 2 00:16:34.490 14:53:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:34.490 14:53:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:34.490 14:53:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@105 -- # continue 2 00:16:34.490 14:53:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:34.490 14:53:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.490 14:53:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:34.490 14:53:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:16:34.490 14:53:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:34.490 14:53:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:34.490 192.168.100.9' 00:16:34.490 14:53:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:34.490 192.168.100.9' 00:16:34.490 14:53:34 -- nvmf/common.sh@446 -- # head -n 1 00:16:34.490 14:53:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:34.490 14:53:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:34.490 192.168.100.9' 00:16:34.490 14:53:34 -- nvmf/common.sh@447 -- # tail -n +2 00:16:34.490 14:53:34 -- nvmf/common.sh@447 -- # head -n 1 00:16:34.490 14:53:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:34.490 14:53:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:34.490 14:53:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:34.490 14:53:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:34.490 14:53:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:34.490 14:53:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:34.490 14:53:34 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:34.490 14:53:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:34.490 14:53:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.490 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:34.490 14:53:34 -- nvmf/common.sh@470 -- # nvmfpid=226234 00:16:34.490 14:53:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:34.490 14:53:34 -- nvmf/common.sh@471 -- # waitforlisten 226234 00:16:34.490 14:53:34 -- common/autotest_common.sh@817 -- # '[' -z 226234 ']' 00:16:34.490 14:53:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.490 14:53:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.490 14:53:34 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.490 14:53:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.490 14:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:34.773 [2024-04-26 14:53:34.610997] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:34.773 [2024-04-26 14:53:34.611153] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.773 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.773 [2024-04-26 14:53:34.747209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.041 [2024-04-26 14:53:34.975491] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.041 [2024-04-26 14:53:34.975557] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.041 [2024-04-26 14:53:34.975581] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.041 [2024-04-26 14:53:34.975600] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.041 [2024-04-26 14:53:34.975615] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:35.041 [2024-04-26 14:53:34.975759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.041 [2024-04-26 14:53:34.975804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:35.041 [2024-04-26 14:53:34.975824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.041 [2024-04-26 14:53:34.975826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:35.609 14:53:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.609 14:53:35 -- common/autotest_common.sh@850 -- # return 0 00:16:35.609 14:53:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:35.609 14:53:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:35.609 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:35.609 14:53:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.609 14:53:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:35.609 14:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.609 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:35.609 [2024-04-26 14:53:35.599824] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7fdf0516c940) succeed. 00:16:35.609 [2024-04-26 14:53:35.610666] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7fdf05128940) succeed. 
00:16:35.868 14:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.868 14:53:35 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.868 14:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.868 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:36.128 Malloc0 00:16:36.128 14:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.128 14:53:35 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:36.128 14:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.128 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:36.128 14:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.128 14:53:35 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:36.128 14:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.128 14:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:36.128 14:53:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.128 14:53:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:36.128 14:53:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.128 14:53:36 -- common/autotest_common.sh@10 -- # set +x 00:16:36.128 [2024-04-26 14:53:36.017745] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:36.128 14:53:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.128 14:53:36 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:36.128 14:53:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:36.128 14:53:36 -- nvmf/common.sh@521 -- # config=() 00:16:36.128 14:53:36 -- nvmf/common.sh@521 -- # local subsystem config 00:16:36.128 14:53:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:36.128 14:53:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:36.128 { 00:16:36.128 "params": { 00:16:36.128 "name": "Nvme$subsystem", 00:16:36.128 "trtype": "$TEST_TRANSPORT", 00:16:36.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.128 "adrfam": "ipv4", 00:16:36.128 "trsvcid": "$NVMF_PORT", 00:16:36.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.128 "hdgst": ${hdgst:-false}, 00:16:36.128 "ddgst": ${ddgst:-false} 00:16:36.128 }, 00:16:36.128 "method": "bdev_nvme_attach_controller" 00:16:36.128 } 00:16:36.128 EOF 00:16:36.128 )") 00:16:36.128 14:53:36 -- nvmf/common.sh@543 -- # cat 00:16:36.128 14:53:36 -- nvmf/common.sh@545 -- # jq . 00:16:36.128 14:53:36 -- nvmf/common.sh@546 -- # IFS=, 00:16:36.128 14:53:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:36.128 "params": { 00:16:36.128 "name": "Nvme1", 00:16:36.128 "trtype": "rdma", 00:16:36.128 "traddr": "192.168.100.8", 00:16:36.128 "adrfam": "ipv4", 00:16:36.128 "trsvcid": "4420", 00:16:36.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.128 "hdgst": false, 00:16:36.128 "ddgst": false 00:16:36.128 }, 00:16:36.128 "method": "bdev_nvme_attach_controller" 00:16:36.128 }' 00:16:36.128 [2024-04-26 14:53:36.096836] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:36.128 [2024-04-26 14:53:36.096968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226403 ] 00:16:36.128 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.388 [2024-04-26 14:53:36.220740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:36.388 [2024-04-26 14:53:36.459077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.388 [2024-04-26 14:53:36.459137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.388 [2024-04-26 14:53:36.459142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.957 I/O targets: 00:16:36.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:36.957 00:16:36.957 00:16:36.957 CUnit - A unit testing framework for C - Version 2.1-3 00:16:36.957 http://cunit.sourceforge.net/ 00:16:36.957 00:16:36.957 00:16:36.957 Suite: bdevio tests on: Nvme1n1 00:16:36.957 Test: blockdev write read block ...passed 00:16:36.957 Test: blockdev write zeroes read block ...passed 00:16:36.957 Test: blockdev write zeroes read no split ...passed 00:16:36.957 Test: blockdev write zeroes read split ...passed 00:16:36.957 Test: blockdev write zeroes read split partial ...passed 00:16:36.957 Test: blockdev reset ...[2024-04-26 14:53:36.958402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:36.957 [2024-04-26 14:53:37.003326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:36.957 [2024-04-26 14:53:37.028792] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:36.957 passed 00:16:36.957 Test: blockdev write read 8 blocks ...passed 00:16:36.957 Test: blockdev write read size > 128k ...passed 00:16:36.957 Test: blockdev write read invalid size ...passed 00:16:36.957 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:36.957 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:36.957 Test: blockdev write read max offset ...passed 00:16:36.957 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:36.957 Test: blockdev writev readv 8 blocks ...passed 00:16:36.957 Test: blockdev writev readv 30 x 1block ...passed 00:16:36.957 Test: blockdev writev readv block ...passed 00:16:36.957 Test: blockdev writev readv size > 128k ...passed 00:16:36.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:36.957 Test: blockdev comparev and writev ...[2024-04-26 14:53:37.035788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.035843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.035875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.035908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.036200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.036238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.036265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.036292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.036559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.036594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.036620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.036647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.036945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.036985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:36.957 [2024-04-26 14:53:37.037020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:36.957 [2024-04-26 14:53:37.037047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:36.957 passed 00:16:36.957 Test: blockdev nvme passthru rw ...passed 00:16:37.218 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:53:37.037603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:37.218 [2024-04-26 14:53:37.037646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:37.218 [2024-04-26 14:53:37.037742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:37.218 [2024-04-26 14:53:37.037775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:37.218 [2024-04-26 14:53:37.037864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:37.218 [2024-04-26 14:53:37.037899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:37.218 [2024-04-26 14:53:37.037988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:37.218 [2024-04-26 14:53:37.038020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:37.218 passed 00:16:37.218 Test: blockdev nvme admin passthru ...passed 00:16:37.218 Test: blockdev copy ...passed 00:16:37.218 00:16:37.218 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.218 suites 1 1 n/a 0 0 00:16:37.218 tests 23 23 23 0 0 00:16:37.218 asserts 152 152 152 0 n/a 00:16:37.218 00:16:37.218 Elapsed time = 0.358 seconds 00:16:38.154 14:53:38 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.154 14:53:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.154 14:53:38 -- common/autotest_common.sh@10 -- # set +x 00:16:38.154 14:53:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.154 14:53:38 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:38.154 14:53:38 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:38.154 14:53:38 -- nvmf/common.sh@477 -- # 
nvmfcleanup 00:16:38.154 14:53:38 -- nvmf/common.sh@117 -- # sync 00:16:38.154 14:53:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:38.154 14:53:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:38.154 14:53:38 -- nvmf/common.sh@120 -- # set +e 00:16:38.154 14:53:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.154 14:53:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:38.154 rmmod nvme_rdma 00:16:38.154 rmmod nvme_fabrics 00:16:38.154 14:53:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.154 14:53:38 -- nvmf/common.sh@124 -- # set -e 00:16:38.154 14:53:38 -- nvmf/common.sh@125 -- # return 0 00:16:38.154 14:53:38 -- nvmf/common.sh@478 -- # '[' -n 226234 ']' 00:16:38.154 14:53:38 -- nvmf/common.sh@479 -- # killprocess 226234 00:16:38.154 14:53:38 -- common/autotest_common.sh@936 -- # '[' -z 226234 ']' 00:16:38.154 14:53:38 -- common/autotest_common.sh@940 -- # kill -0 226234 00:16:38.154 14:53:38 -- common/autotest_common.sh@941 -- # uname 00:16:38.154 14:53:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.154 14:53:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 226234 00:16:38.154 14:53:38 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:38.154 14:53:38 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:38.154 14:53:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 226234' 00:16:38.154 killing process with pid 226234 00:16:38.154 14:53:38 -- common/autotest_common.sh@955 -- # kill 226234 00:16:38.154 14:53:38 -- common/autotest_common.sh@960 -- # wait 226234 00:16:38.722 [2024-04-26 14:53:38.598479] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:40.107 14:53:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:40.107 14:53:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:40.107 00:16:40.107 real 0m7.538s 00:16:40.107 user 0m22.535s 00:16:40.107 sys 
0m2.358s 00:16:40.107 14:53:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.107 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 ************************************ 00:16:40.107 END TEST nvmf_bdevio 00:16:40.107 ************************************ 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:16:40.107 14:53:39 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:16:40.107 14:53:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.107 14:53:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.107 14:53:39 -- common/autotest_common.sh@10 -- # set +x 00:16:40.107 ************************************ 00:16:40.107 START TEST nvmf_device_removal 00:16:40.107 ************************************ 00:16:40.107 14:53:40 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:16:40.107 * Looking for test storage... 
00:16:40.107 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:40.107 14:53:40 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:16:40.107 14:53:40 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:40.107 14:53:40 -- common/autotest_common.sh@34 -- # set -e 00:16:40.107 14:53:40 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:40.107 14:53:40 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:40.107 14:53:40 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:16:40.107 14:53:40 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:40.107 14:53:40 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:16:40.107 14:53:40 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:40.107 14:53:40 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:40.107 14:53:40 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:40.107 14:53:40 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:40.107 14:53:40 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:40.107 14:53:40 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:40.107 14:53:40 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:40.107 14:53:40 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:40.107 14:53:40 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:40.107 14:53:40 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:40.107 14:53:40 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:40.107 14:53:40 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:40.107 14:53:40 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:40.107 14:53:40 -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:40.107 14:53:40 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:40.107 14:53:40 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:40.107 14:53:40 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:16:40.107 14:53:40 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:40.107 14:53:40 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:40.107 14:53:40 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:40.107 14:53:40 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:40.107 14:53:40 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:40.107 14:53:40 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:40.107 14:53:40 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:40.107 14:53:40 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:40.107 14:53:40 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:40.107 14:53:40 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:40.107 14:53:40 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:40.107 14:53:40 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:40.107 14:53:40 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:40.107 14:53:40 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:40.107 14:53:40 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:40.107 14:53:40 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:16:40.107 14:53:40 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:40.107 14:53:40 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:40.107 14:53:40 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:40.107 14:53:40 
-- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:40.107 14:53:40 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:40.107 14:53:40 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:40.107 14:53:40 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:40.107 14:53:40 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:16:40.107 14:53:40 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:16:40.107 14:53:40 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:40.107 14:53:40 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:16:40.107 14:53:40 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:16:40.107 14:53:40 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:16:40.107 14:53:40 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:16:40.107 14:53:40 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:16:40.107 14:53:40 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:16:40.107 14:53:40 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:16:40.107 14:53:40 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:16:40.107 14:53:40 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:16:40.107 14:53:40 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:16:40.107 14:53:40 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:16:40.107 14:53:40 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:16:40.107 14:53:40 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:16:40.107 14:53:40 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:16:40.107 14:53:40 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:16:40.107 14:53:40 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:16:40.107 14:53:40 -- common/build_config.sh@66 -- # 
CONFIG_HAVE_KEYUTILS=n 00:16:40.107 14:53:40 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:16:40.107 14:53:40 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:40.107 14:53:40 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:16:40.107 14:53:40 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:16:40.107 14:53:40 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:16:40.108 14:53:40 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:16:40.108 14:53:40 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:16:40.108 14:53:40 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:16:40.108 14:53:40 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:16:40.108 14:53:40 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:16:40.108 14:53:40 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:16:40.108 14:53:40 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:16:40.108 14:53:40 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:16:40.108 14:53:40 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:40.108 14:53:40 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:16:40.108 14:53:40 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:16:40.108 14:53:40 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:16:40.108 14:53:40 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:16:40.108 14:53:40 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:16:40.108 14:53:40 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:16:40.108 14:53:40 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:16:40.108 14:53:40 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:40.108 14:53:40 -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:16:40.108 14:53:40 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:40.108 14:53:40 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:40.108 14:53:40 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:40.108 14:53:40 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:40.108 14:53:40 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:40.108 14:53:40 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:40.108 14:53:40 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:40.108 14:53:40 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:16:40.108 14:53:40 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:40.108 #define SPDK_CONFIG_H 00:16:40.108 #define SPDK_CONFIG_APPS 1 00:16:40.108 #define SPDK_CONFIG_ARCH native 00:16:40.108 #define SPDK_CONFIG_ASAN 1 00:16:40.108 #undef SPDK_CONFIG_AVAHI 00:16:40.108 #undef SPDK_CONFIG_CET 00:16:40.108 #define SPDK_CONFIG_COVERAGE 1 00:16:40.108 #define SPDK_CONFIG_CROSS_PREFIX 00:16:40.108 #undef SPDK_CONFIG_CRYPTO 00:16:40.108 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:40.108 #undef SPDK_CONFIG_CUSTOMOCF 00:16:40.108 #undef SPDK_CONFIG_DAOS 00:16:40.108 #define SPDK_CONFIG_DAOS_DIR 00:16:40.108 #define SPDK_CONFIG_DEBUG 1 00:16:40.108 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:40.108 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:16:40.108 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:40.108 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:40.108 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:40.108 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:16:40.108 #define SPDK_CONFIG_EXAMPLES 1 
00:16:40.108 #undef SPDK_CONFIG_FC 00:16:40.108 #define SPDK_CONFIG_FC_PATH 00:16:40.108 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:40.108 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:40.108 #undef SPDK_CONFIG_FUSE 00:16:40.108 #undef SPDK_CONFIG_FUZZER 00:16:40.108 #define SPDK_CONFIG_FUZZER_LIB 00:16:40.108 #undef SPDK_CONFIG_GOLANG 00:16:40.108 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:40.108 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:40.108 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:40.108 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:16:40.108 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:40.108 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:40.108 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:40.108 #define SPDK_CONFIG_IDXD 1 00:16:40.108 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:40.108 #undef SPDK_CONFIG_IPSEC_MB 00:16:40.108 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:40.108 #define SPDK_CONFIG_ISAL 1 00:16:40.108 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:40.108 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:40.108 #define SPDK_CONFIG_LIBDIR 00:16:40.108 #undef SPDK_CONFIG_LTO 00:16:40.108 #define SPDK_CONFIG_MAX_LCORES 00:16:40.108 #define SPDK_CONFIG_NVME_CUSE 1 00:16:40.108 #undef SPDK_CONFIG_OCF 00:16:40.108 #define SPDK_CONFIG_OCF_PATH 00:16:40.108 #define SPDK_CONFIG_OPENSSL_PATH 00:16:40.108 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:40.108 #define SPDK_CONFIG_PGO_DIR 00:16:40.108 #undef SPDK_CONFIG_PGO_USE 00:16:40.108 #define SPDK_CONFIG_PREFIX /usr/local 00:16:40.108 #undef SPDK_CONFIG_RAID5F 00:16:40.108 #undef SPDK_CONFIG_RBD 00:16:40.108 #define SPDK_CONFIG_RDMA 1 00:16:40.108 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:40.108 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:40.108 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:40.108 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:40.108 #define SPDK_CONFIG_SHARED 1 00:16:40.108 #undef SPDK_CONFIG_SMA 00:16:40.108 #define SPDK_CONFIG_TESTS 1 00:16:40.108 #undef SPDK_CONFIG_TSAN 00:16:40.108 #define SPDK_CONFIG_UBLK 1 00:16:40.108 
#define SPDK_CONFIG_UBSAN 1 00:16:40.108 #undef SPDK_CONFIG_UNIT_TESTS 00:16:40.108 #undef SPDK_CONFIG_URING 00:16:40.108 #define SPDK_CONFIG_URING_PATH 00:16:40.108 #undef SPDK_CONFIG_URING_ZNS 00:16:40.108 #undef SPDK_CONFIG_USDT 00:16:40.108 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:40.108 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:40.108 #undef SPDK_CONFIG_VFIO_USER 00:16:40.108 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:40.108 #define SPDK_CONFIG_VHOST 1 00:16:40.108 #define SPDK_CONFIG_VIRTIO 1 00:16:40.108 #undef SPDK_CONFIG_VTUNE 00:16:40.108 #define SPDK_CONFIG_VTUNE_DIR 00:16:40.108 #define SPDK_CONFIG_WERROR 1 00:16:40.108 #define SPDK_CONFIG_WPDK_DIR 00:16:40.108 #undef SPDK_CONFIG_XNVME 00:16:40.108 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:40.108 14:53:40 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:40.108 14:53:40 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:40.108 14:53:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.108 14:53:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.108 14:53:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.108 14:53:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.108 14:53:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.108 14:53:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.108 14:53:40 -- paths/export.sh@5 -- # export PATH 00:16:40.108 14:53:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.108 14:53:40 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:16:40.108 14:53:40 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:16:40.108 14:53:40 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:16:40.108 14:53:40 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:16:40.108 14:53:40 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:40.108 14:53:40 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:16:40.108 14:53:40 -- pm/common@67 -- # TEST_TAG=N/A 00:16:40.108 14:53:40 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:16:40.108 14:53:40 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:16:40.108 14:53:40 -- pm/common@71 -- # uname -s 00:16:40.108 14:53:40 -- pm/common@71 -- # PM_OS=Linux 00:16:40.108 14:53:40 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:40.108 14:53:40 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:16:40.108 14:53:40 -- pm/common@76 -- # [[ Linux == Linux ]] 00:16:40.108 14:53:40 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:16:40.108 14:53:40 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:16:40.108 14:53:40 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:40.108 14:53:40 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:40.108 14:53:40 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:16:40.108 14:53:40 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:16:40.108 14:53:40 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:16:40.108 14:53:40 -- common/autotest_common.sh@57 -- # : 0 00:16:40.108 14:53:40 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:16:40.108 14:53:40 -- common/autotest_common.sh@61 -- # : 0 00:16:40.108 14:53:40 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:40.108 14:53:40 -- common/autotest_common.sh@63 -- # : 0 00:16:40.108 14:53:40 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:16:40.108 14:53:40 -- common/autotest_common.sh@65 -- # : 1 00:16:40.108 14:53:40 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:40.108 14:53:40 -- common/autotest_common.sh@67 -- # : 0 00:16:40.108 14:53:40 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:16:40.109 14:53:40 -- common/autotest_common.sh@69 -- # : 00:16:40.109 14:53:40 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:16:40.109 14:53:40 -- common/autotest_common.sh@71 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:16:40.109 14:53:40 -- common/autotest_common.sh@73 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:16:40.109 14:53:40 -- common/autotest_common.sh@75 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:16:40.109 14:53:40 -- common/autotest_common.sh@77 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:40.109 14:53:40 -- common/autotest_common.sh@79 -- # : 0 
00:16:40.109 14:53:40 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:16:40.109 14:53:40 -- common/autotest_common.sh@81 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:16:40.109 14:53:40 -- common/autotest_common.sh@83 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:16:40.109 14:53:40 -- common/autotest_common.sh@85 -- # : 1 00:16:40.109 14:53:40 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:16:40.109 14:53:40 -- common/autotest_common.sh@87 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:16:40.109 14:53:40 -- common/autotest_common.sh@89 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:16:40.109 14:53:40 -- common/autotest_common.sh@91 -- # : 1 00:16:40.109 14:53:40 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:16:40.109 14:53:40 -- common/autotest_common.sh@93 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:16:40.109 14:53:40 -- common/autotest_common.sh@95 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:40.109 14:53:40 -- common/autotest_common.sh@97 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:16:40.109 14:53:40 -- common/autotest_common.sh@99 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:16:40.109 14:53:40 -- common/autotest_common.sh@101 -- # : rdma 00:16:40.109 14:53:40 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:40.109 14:53:40 -- common/autotest_common.sh@103 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:16:40.109 14:53:40 -- common/autotest_common.sh@105 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@106 -- # export 
SPDK_TEST_VHOST 00:16:40.109 14:53:40 -- common/autotest_common.sh@107 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:16:40.109 14:53:40 -- common/autotest_common.sh@109 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:16:40.109 14:53:40 -- common/autotest_common.sh@111 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:16:40.109 14:53:40 -- common/autotest_common.sh@113 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:16:40.109 14:53:40 -- common/autotest_common.sh@115 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:16:40.109 14:53:40 -- common/autotest_common.sh@117 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:40.109 14:53:40 -- common/autotest_common.sh@119 -- # : 1 00:16:40.109 14:53:40 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:16:40.109 14:53:40 -- common/autotest_common.sh@121 -- # : 1 00:16:40.109 14:53:40 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:16:40.109 14:53:40 -- common/autotest_common.sh@123 -- # : 00:16:40.109 14:53:40 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:40.109 14:53:40 -- common/autotest_common.sh@125 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:16:40.109 14:53:40 -- common/autotest_common.sh@127 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:16:40.109 14:53:40 -- common/autotest_common.sh@129 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:16:40.109 14:53:40 -- common/autotest_common.sh@131 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:16:40.109 14:53:40 -- common/autotest_common.sh@133 
-- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:16:40.109 14:53:40 -- common/autotest_common.sh@135 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:16:40.109 14:53:40 -- common/autotest_common.sh@137 -- # : 00:16:40.109 14:53:40 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:16:40.109 14:53:40 -- common/autotest_common.sh@139 -- # : true 00:16:40.109 14:53:40 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:16:40.109 14:53:40 -- common/autotest_common.sh@141 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:16:40.109 14:53:40 -- common/autotest_common.sh@143 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:16:40.109 14:53:40 -- common/autotest_common.sh@145 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:16:40.109 14:53:40 -- common/autotest_common.sh@147 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:16:40.109 14:53:40 -- common/autotest_common.sh@149 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:16:40.109 14:53:40 -- common/autotest_common.sh@151 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:16:40.109 14:53:40 -- common/autotest_common.sh@153 -- # : mlx5 00:16:40.109 14:53:40 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:16:40.109 14:53:40 -- common/autotest_common.sh@155 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:16:40.109 14:53:40 -- common/autotest_common.sh@157 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:16:40.109 14:53:40 -- common/autotest_common.sh@159 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@160 -- 
# export SPDK_TEST_XNVME 00:16:40.109 14:53:40 -- common/autotest_common.sh@161 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:16:40.109 14:53:40 -- common/autotest_common.sh@163 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:16:40.109 14:53:40 -- common/autotest_common.sh@166 -- # : 00:16:40.109 14:53:40 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:16:40.109 14:53:40 -- common/autotest_common.sh@168 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:16:40.109 14:53:40 -- common/autotest_common.sh@170 -- # : 0 00:16:40.109 14:53:40 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:40.109 14:53:40 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:40.109 14:53:40 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:40.109 14:53:40 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:40.109 14:53:40 -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:16:40.109 14:53:40 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:16:40.109 14:53:40 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:40.109 14:53:40 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:16:40.109 14:53:40 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:40.109 14:53:40 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:40.109 14:53:40 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:40.109 14:53:40 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:40.109 14:53:40 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:40.109 14:53:40 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:16:40.109 14:53:40 -- common/autotest_common.sh@199 -- # cat 00:16:40.109 14:53:40 
-- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:16:40.109 14:53:40 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:40.110 14:53:40 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:40.110 14:53:40 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:40.110 14:53:40 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:40.110 14:53:40 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:16:40.110 14:53:40 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:16:40.110 14:53:40 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:40.110 14:53:40 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:16:40.110 14:53:40 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:40.110 14:53:40 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:16:40.110 14:53:40 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:40.110 14:53:40 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:40.110 14:53:40 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:40.110 14:53:40 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:40.110 14:53:40 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:40.110 14:53:40 -- common/autotest_common.sh@245 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:40.110 14:53:40 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:40.110 14:53:40 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:40.110 14:53:40 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:16:40.110 14:53:40 -- common/autotest_common.sh@252 -- # export valgrind= 00:16:40.110 14:53:40 -- common/autotest_common.sh@252 -- # valgrind= 00:16:40.110 14:53:40 -- common/autotest_common.sh@258 -- # uname -s 00:16:40.110 14:53:40 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:16:40.110 14:53:40 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:16:40.110 14:53:40 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:16:40.110 14:53:40 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:16:40.110 14:53:40 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:16:40.110 14:53:40 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:16:40.110 14:53:40 -- common/autotest_common.sh@268 -- # MAKE=make 00:16:40.110 14:53:40 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:16:40.110 14:53:40 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:16:40.110 14:53:40 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:16:40.110 14:53:40 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:16:40.110 14:53:40 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:16:40.110 14:53:40 -- common/autotest_common.sh@289 -- # for i in "$@" 00:16:40.110 14:53:40 -- common/autotest_common.sh@290 -- # case "$i" in 00:16:40.110 14:53:40 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:16:40.110 14:53:40 -- common/autotest_common.sh@307 -- # [[ -z 226985 ]] 00:16:40.110 14:53:40 -- common/autotest_common.sh@307 -- # kill -0 226985 00:16:40.110 14:53:40 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:16:40.110 14:53:40 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 
00:16:40.110 14:53:40 -- common/autotest_common.sh@319 -- # local requested_size=2147483648
00:16:40.110 14:53:40 -- common/autotest_common.sh@320 -- # local mount target_dir
00:16:40.110 14:53:40 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses
00:16:40.110 14:53:40 -- common/autotest_common.sh@323 -- # local source fs size avail mount use
00:16:40.110 14:53:40 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates
00:16:40.110 14:53:40 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX
00:16:40.110 14:53:40 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.hXMA0d
00:16:40.110 14:53:40 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:16:40.110 14:53:40 -- common/autotest_common.sh@334 -- # [[ -n '' ]]
00:16:40.110 14:53:40 -- common/autotest_common.sh@339 -- # [[ -n '' ]]
00:16:40.110 14:53:40 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hXMA0d/tests/target /tmp/spdk.hXMA0d
00:16:40.110 14:53:40 -- common/autotest_common.sh@347 -- # requested_size=2214592512
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@316 -- # df -T
00:16:40.110 14:53:40 -- common/autotest_common.sh@316 -- # grep -v Filesystem
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=0
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=56683675648
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=5311033344
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=30989012992
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=8339456
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=12376539136
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=22405120
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=30997049344
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=307200
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936
00:16:40.110 14:53:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032
00:16:40.110 14:53:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096
00:16:40.110 14:53:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:16:40.110 14:53:40 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n'
00:16:40.110 * Looking for test storage...
00:16:40.110 14:53:40 -- common/autotest_common.sh@357 -- # local target_space new_size
00:16:40.110 14:53:40 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}"
00:16:40.110 14:53:40 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:40.110 14:53:40 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}'
00:16:40.110 14:53:40 -- common/autotest_common.sh@361 -- # mount=/
00:16:40.110 14:53:40 -- common/autotest_common.sh@363 -- # target_space=56683675648
00:16:40.110 14:53:40 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size ))
00:16:40.110 14:53:40 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size ))
00:16:40.110 14:53:40 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]]
00:16:40.110 14:53:40 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]]
00:16:40.110 14:53:40 -- common/autotest_common.sh@369 -- # [[ / == / ]]
00:16:40.110 14:53:40 -- common/autotest_common.sh@370 -- # new_size=7525625856
00:16:40.110 14:53:40 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 ))
00:16:40.110 14:53:40 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:40.110 14:53:40 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:40.110 14:53:40 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:40.110 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:16:40.110 14:53:40 -- common/autotest_common.sh@378 -- # return 0
00:16:40.110 14:53:40 -- common/autotest_common.sh@1668 -- # set -o errtrace
00:16:40.110 14:53:40 -- common/autotest_common.sh@1669 -- # shopt -s extdebug
00:16:40.110 14:53:40 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:16:40.110 14:53:40 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:16:40.110 14:53:40 -- common/autotest_common.sh@1673 -- # true
00:16:40.110 14:53:40 -- common/autotest_common.sh@1675 -- # xtrace_fd
00:16:40.371 14:53:40 -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:16:40.371 14:53:40 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:16:40.371 14:53:40 -- common/autotest_common.sh@27 -- # exec
00:16:40.371 14:53:40 -- common/autotest_common.sh@29 -- # exec
00:16:40.371 14:53:40 -- common/autotest_common.sh@31 -- # xtrace_restore
00:16:40.371 14:53:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:16:40.371 14:53:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:16:40.371 14:53:40 -- common/autotest_common.sh@18 -- # set -x
00:16:40.371 14:53:40 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:16:40.371 14:53:40 -- nvmf/common.sh@7 -- # uname -s
00:16:40.371 14:53:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:40.371 14:53:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:40.371 14:53:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:40.371 14:53:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:40.371 14:53:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:40.371 14:53:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:40.371 14:53:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:40.371 14:53:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:40.371 14:53:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:40.371 14:53:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:40.371 14:53:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:16:40.371 14:53:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:16:40.371 14:53:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:40.371 14:53:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:40.371 14:53:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:40.371 14:53:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:40.371 14:53:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:16:40.371 14:53:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:40.371 14:53:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:40.371 14:53:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:40.371 14:53:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.371 14:53:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.371 14:53:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.371 14:53:40 -- paths/export.sh@5 -- # export PATH
00:16:40.371 14:53:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.371 14:53:40 -- nvmf/common.sh@47 -- # : 0
00:16:40.371 14:53:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:40.371 14:53:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:40.371 14:53:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:40.371 14:53:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:40.371 14:53:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:40.371 14:53:40 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:40.371 14:53:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:40.371 14:53:40 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:40.371 14:53:40 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3
00:16:40.371 14:53:40 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4
00:16:40.371 14:53:40 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:40.371 14:53:40 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1
00:16:40.371 14:53:40 -- target/device_removal.sh@18 -- # nvmftestinit
00:16:40.371 14:53:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']'
00:16:40.371 14:53:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:40.371 14:53:40 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:40.372 14:53:40 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:40.372 14:53:40 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:40.372 14:53:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.372 14:53:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:40.372 14:53:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.372 14:53:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:40.372 14:53:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:40.372 14:53:40 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:40.372 14:53:40 -- common/autotest_common.sh@10 -- # set +x
00:16:42.278 14:53:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:42.278 14:53:42 -- nvmf/common.sh@291 -- # pci_devs=()
00:16:42.278 14:53:42 -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:42.278 14:53:42 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:42.278 14:53:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:42.278 14:53:42 -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:42.278 14:53:42 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:42.278 14:53:42 -- nvmf/common.sh@295 -- # net_devs=()
00:16:42.278 14:53:42 -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:42.278 14:53:42 -- nvmf/common.sh@296 -- # e810=()
00:16:42.278 14:53:42 -- nvmf/common.sh@296 -- # local -ga e810
00:16:42.278 14:53:42 -- nvmf/common.sh@297 -- # x722=()
00:16:42.278 14:53:42 -- nvmf/common.sh@297 -- # local -ga x722
00:16:42.278 14:53:42 -- nvmf/common.sh@298 -- # mlx=()
00:16:42.278 14:53:42 -- nvmf/common.sh@298 -- # local -ga mlx
00:16:42.278 14:53:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:42.278 14:53:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:42.278 14:53:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)'
00:16:42.278 Found 0000:09:00.0 (0x15b3 - 0x1017)
00:16:42.278 14:53:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:16:42.278 14:53:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:42.278 14:53:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)'
00:16:42.278 Found 0000:09:00.1 (0x15b3 - 0x1017)
00:16:42.278 14:53:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:16:42.278 14:53:42 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:42.278 14:53:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:42.278 14:53:42 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:42.278 14:53:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0'
00:16:42.278 Found net devices under 0000:09:00.0: mlx_0_0
00:16:42.278 14:53:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:42.278 14:53:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:42.278 14:53:42 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:42.278 14:53:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1'
00:16:42.278 Found net devices under 0000:09:00.1: mlx_0_1
00:16:42.278 14:53:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:42.278 14:53:42 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@403 -- # is_hw=yes
00:16:42.278 14:53:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]]
00:16:42.278 14:53:42 -- nvmf/common.sh@409 -- # rdma_device_init
00:16:42.278 14:53:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules
00:16:42.278 14:53:42 -- nvmf/common.sh@58 -- # uname
00:16:42.278 14:53:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:16:42.278 14:53:42 -- nvmf/common.sh@62 -- # modprobe ib_cm
00:16:42.278 14:53:42 -- nvmf/common.sh@63 -- # modprobe ib_core
00:16:42.278 14:53:42 -- nvmf/common.sh@64 -- # modprobe ib_umad
00:16:42.278 14:53:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:16:42.278 14:53:42 -- nvmf/common.sh@66 -- # modprobe iw_cm
00:16:42.278 14:53:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:16:42.278 14:53:42 -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:16:42.278 14:53:42 -- nvmf/common.sh@491 -- # allocate_nic_ips
00:16:42.278 14:53:42 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:16:42.278 14:53:42 -- nvmf/common.sh@73 -- # get_rdma_if_list
00:16:42.278 14:53:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:16:42.278 14:53:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:16:42.278 14:53:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:16:42.278 14:53:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:16:42.278 14:53:42 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:16:42.278 14:53:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:16:42.278 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@105 -- # continue 2
00:16:42.279 14:53:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@105 -- # continue 2
00:16:42.279 14:53:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:16:42.279 14:53:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:16:42.279 14:53:42 -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:16:42.279 14:53:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:16:42.279 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:16:42.279 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff
00:16:42.279 altname enp9s0f0np0
00:16:42.279 inet 192.168.100.8/24 scope global mlx_0_0
00:16:42.279 valid_lft forever preferred_lft forever
00:16:42.279 14:53:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:16:42.279 14:53:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:16:42.279 14:53:42 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:16:42.279 14:53:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:16:42.279 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:16:42.279 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff
00:16:42.279 altname enp9s0f1np1
00:16:42.279 inet 192.168.100.9/24 scope global mlx_0_1
00:16:42.279 valid_lft forever preferred_lft forever
00:16:42.279 14:53:42 -- nvmf/common.sh@411 -- # return 0
00:16:42.279 14:53:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:42.279 14:53:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:16:42.279 14:53:42 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:16:42.279 14:53:42 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:16:42.279 14:53:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:16:42.279 14:53:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:16:42.279 14:53:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:16:42.279 14:53:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:16:42.279 14:53:42 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:16:42.279 14:53:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@105 -- # continue 2
00:16:42.279 14:53:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:16:42.279 14:53:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:16:42.279 14:53:42 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@105 -- # continue 2
00:16:42.279 14:53:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:16:42.279 14:53:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:16:42.279 14:53:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:16:42.279 14:53:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:16:42.279 14:53:42 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:16:42.279 14:53:42 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:16:42.279 192.168.100.9'
00:16:42.279 14:53:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:16:42.279 192.168.100.9'
00:16:42.279 14:53:42 -- nvmf/common.sh@446 -- # head -n 1
00:16:42.279 14:53:42 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:16:42.279 14:53:42 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:16:42.279 192.168.100.9'
00:16:42.279 14:53:42 -- nvmf/common.sh@447 -- # tail -n +2
00:16:42.279 14:53:42 -- nvmf/common.sh@447 -- # head -n 1
00:16:42.279 14:53:42 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:16:42.279 14:53:42 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:16:42.279 14:53:42 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:16:42.279 14:53:42 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:16:42.279 14:53:42 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:16:42.279 14:53:42 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:16:42.279 14:53:42 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf
00:16:42.279 14:53:42 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26
00:16:42.279 14:53:42 -- target/device_removal.sh@237 -- # BOND_MASK=24
00:16:42.279 14:53:42 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq
00:16:42.279 14:53:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:42.279 14:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:42.279 14:53:42 -- common/autotest_common.sh@10 -- # set +x
00:16:42.279 ************************************
00:16:42.279 START TEST nvmf_device_removal_pci_remove_no_srq
00:16:42.279 ************************************
00:16:42.279 14:53:42 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq
00:16:42.279 14:53:42 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
00:16:42.279 14:53:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:42.279 14:53:42 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:42.279 14:53:42 -- common/autotest_common.sh@10 -- # set +x
00:16:42.279 14:53:42 -- nvmf/common.sh@470 -- # nvmfpid=228638
00:16:42.279 14:53:42 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:16:42.279 14:53:42 -- nvmf/common.sh@471 -- # waitforlisten 228638
00:16:42.279 14:53:42 -- common/autotest_common.sh@817 -- # '[' -z 228638 ']'
00:16:42.279 14:53:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:42.279 14:53:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:42.279 14:53:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:42.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:42.279 14:53:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:42.279 14:53:42 -- common/autotest_common.sh@10 -- # set +x
00:16:42.538 [2024-04-26 14:53:42.430486] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:16:42.538 [2024-04-26 14:53:42.430627] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:42.538 EAL: No free 2048 kB hugepages reported on node 1
00:16:42.538 [2024-04-26 14:53:42.562973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:42.795 [2024-04-26 14:53:42.812310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:42.795 [2024-04-26 14:53:42.812389] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:42.795 [2024-04-26 14:53:42.812413] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:42.795 [2024-04-26 14:53:42.812436] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:42.795 [2024-04-26 14:53:42.812455] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:42.795 [2024-04-26 14:53:42.812555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:42.795 [2024-04-26 14:53:42.812563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:43.364 14:53:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:43.364 14:53:43 -- common/autotest_common.sh@850 -- # return 0
00:16:43.364 14:53:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:43.364 14:53:43 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:43.364 14:53:43 -- common/autotest_common.sh@10 -- # set +x
00:16:43.364 14:53:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:43.364 14:53:43 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq
00:16:43.364 14:53:43 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict
00:16:43.364 14:53:43 -- target/device_removal.sh@46 -- # netdev_nvme_dict=()
00:16:43.364 14:53:43 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
00:16:43.364 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:43.364 14:53:43 -- common/autotest_common.sh@10 -- # set +x
00:16:43.364 [2024-04-26 14:53:43.377772] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027c40/0x7f1211445940) succeed.
00:16:43.364 [2024-04-26 14:53:43.390044] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027dc0/0x7f12113fe940) succeed.
00:16:43.364 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.364 14:53:43 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:16:43.364 14:53:43 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:43.364 14:53:43 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:43.364 14:53:43 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:43.364 14:53:43 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:43.364 14:53:43 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:43.364 14:53:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.364 14:53:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.364 14:53:43 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:43.364 14:53:43 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:43.364 14:53:43 -- nvmf/common.sh@105 -- # continue 2 00:16:43.364 14:53:43 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.364 14:53:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.364 14:53:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:43.364 14:53:43 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.364 14:53:43 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:43.364 14:53:43 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:43.364 14:53:43 -- nvmf/common.sh@105 -- # continue 2 00:16:43.364 14:53:43 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:16:43.364 14:53:43 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@25 -- # local -a dev_name 00:16:43.364 14:53:43 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:16:43.364 
14:53:43 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:16:43.364 14:53:43 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:43.364 14:53:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:43.364 14:53:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.364 14:53:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.364 14:53:43 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:16:43.364 14:53:43 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:16:43.364 14:53:43 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:16:43.364 14:53:43 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:16:43.364 14:53:43 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:16:43.364 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.364 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.624 14:53:43 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:16:43.624 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.624 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.624 14:53:43 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:16:43.624 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.624 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.624 14:53:43 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t 
rdma -a 192.168.100.8 -s 4420 00:16:43.624 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.624 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.624 [2024-04-26 14:53:43.606576] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:43.624 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.624 14:53:43 -- target/device_removal.sh@41 -- # return 0 00:16:43.624 14:53:43 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:16:43.624 14:53:43 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:16:43.624 14:53:43 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@25 -- # local -a dev_name 00:16:43.624 14:53:43 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:16:43.624 14:53:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:43.624 14:53:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:43.624 14:53:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.624 14:53:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.624 14:53:43 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:16:43.624 14:53:43 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:16:43.624 14:53:43 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:16:43.624 14:53:43 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:16:43.624 14:53:43 -- target/device_removal.sh@36 -- # rpc_cmd 
bdev_malloc_create 128 512 -b mlx_0_1 00:16:43.624 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.624 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.903 14:53:43 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:16:43.903 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.903 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.903 14:53:43 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:16:43.903 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.903 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.903 14:53:43 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:16:43.903 14:53:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.903 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 [2024-04-26 14:53:43.795548] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:16:43.903 14:53:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.903 14:53:43 -- target/device_removal.sh@41 -- # return 0 00:16:43.903 14:53:43 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:16:43.903 14:53:43 -- target/device_removal.sh@53 -- # return 0 00:16:43.903 14:53:43 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:16:43.903 14:53:43 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:16:43.903 14:53:43 -- target/device_removal.sh@87 -- # local dev_names 00:16:43.903 14:53:43 -- 
target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:43.903 14:53:43 -- target/device_removal.sh@91 -- # bdevperf_pid=228812 00:16:43.903 14:53:43 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.903 14:53:43 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:43.903 14:53:43 -- target/device_removal.sh@94 -- # waitforlisten 228812 /var/tmp/bdevperf.sock 00:16:43.903 14:53:43 -- common/autotest_common.sh@817 -- # '[' -z 228812 ']' 00:16:43.903 14:53:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.903 14:53:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:43.903 14:53:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:43.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:43.903 14:53:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:43.903 14:53:43 -- common/autotest_common.sh@10 -- # set +x 00:16:44.836 14:53:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:44.836 14:53:44 -- common/autotest_common.sh@850 -- # return 0 00:16:44.836 14:53:44 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:44.836 14:53:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.836 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:16:44.836 14:53:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.836 14:53:44 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:16:44.836 14:53:44 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:16:44.836 14:53:44 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:16:44.836 14:53:44 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:16:44.836 14:53:44 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:16:44.836 14:53:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:44.836 14:53:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:44.836 14:53:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.836 14:53:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.836 14:53:44 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:16:44.836 14:53:44 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:16:44.836 14:53:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.836 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:16:44.836 Nvme_mlx_0_0n1 00:16:44.836 14:53:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:44.836 14:53:44 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:16:44.836 
14:53:44 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:16:44.836 14:53:44 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:16:44.836 14:53:44 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:16:44.836 14:53:44 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:16:44.836 14:53:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:44.837 14:53:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:44.837 14:53:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.837 14:53:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.837 14:53:44 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:16:44.837 14:53:44 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:16:44.837 14:53:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:44.837 14:53:44 -- common/autotest_common.sh@10 -- # set +x 00:16:45.096 Nvme_mlx_0_1n1 00:16:45.096 14:53:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.096 14:53:44 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=228958 00:16:45.096 14:53:44 -- target/device_removal.sh@112 -- # sleep 5 00:16:45.096 14:53:44 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:50.380 14:53:49 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:16:50.380 14:53:49 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:16:50.380 14:53:49 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 
00:16:50.380 14:53:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:16:50.380 14:53:50 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:16:50.380 14:53:50 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:16:50.380 14:53:50 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:16:50.380 14:53:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:50.380 14:53:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:50.380 14:53:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:50.380 14:53:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:50.380 14:53:50 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:16:50.380 14:53:50 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:16:50.380 14:53:50 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:16:50.380 14:53:50 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:16:50.380 14:53:50 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:16:50.380 14:53:50 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:50.380 14:53:50 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:50.380 14:53:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.380 14:53:50 -- common/autotest_common.sh@10 -- # set +x 00:16:50.380 14:53:50 -- target/device_removal.sh@77 -- # grep mlx5_0 00:16:50.380 14:53:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.380 mlx5_0 00:16:50.380 14:53:50 -- target/device_removal.sh@78 -- # return 0 00:16:50.380 14:53:50 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:16:50.380 
14:53:50 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@67 -- # echo 1 00:16:50.380 14:53:50 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:16:50.380 14:53:50 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:16:50.380 [2024-04-26 14:53:50.089601] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:16:50.380 [2024-04-26 14:53:50.089783] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:50.380 [2024-04-26 14:53:50.090587] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:53.675 14:53:53 -- target/device_removal.sh@147 -- # seq 1 10 00:16:53.675 14:53:53 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:16:53.675 14:53:53 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:16:53.675 14:53:53 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:16:53.675 14:53:53 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:53.675 14:53:53 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:53.675 14:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.675 14:53:53 -- target/device_removal.sh@77 -- # grep mlx5_0 00:16:53.675 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:16:53.675 14:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.675 14:53:53 -- target/device_removal.sh@78 -- # return 1 00:16:53.675 14:53:53 -- target/device_removal.sh@149 -- # break 00:16:53.675 14:53:53 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:53.675 14:53:53 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:53.675 14:53:53 -- 
target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:53.675 14:53:53 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:53.675 14:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.675 14:53:53 -- common/autotest_common.sh@10 -- # set +x 00:16:53.675 14:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.675 14:53:53 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:16:53.675 14:53:53 -- target/device_removal.sh@160 -- # rescan_pci 00:16:53.675 14:53:53 -- target/device_removal.sh@57 -- # echo 1 00:16:54.613 [2024-04-26 14:53:54.544199] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x617000006600, err 11. Skip rescan. 00:16:54.613 14:53:54 -- target/device_removal.sh@162 -- # seq 1 10 00:16:54.613 14:53:54 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:16:54.613 14:53:54 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net 00:16:54.613 14:53:54 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:16:54.613 14:53:54 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:16:54.613 14:53:54 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:16:54.613 14:53:54 -- target/device_removal.sh@171 -- # break 00:16:54.613 14:53:54 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:16:54.613 14:53:54 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:16:54.613 [2024-04-26 14:53:54.647015] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a4c0/0x7f1202f89940) succeed. 00:16:54.613 [2024-04-26 14:53:54.647150] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
00:16:55.552 14:53:55 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:16:55.552 14:53:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:55.552 14:53:55 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:16:55.552 14:53:55 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:16:55.552 14:53:55 -- target/device_removal.sh@186 -- # seq 1 10 00:16:55.552 14:53:55 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:16:55.552 14:53:55 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:55.552 14:53:55 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:55.552 14:53:55 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:55.552 14:53:55 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:55.552 14:53:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.552 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 [2024-04-26 14:53:55.565363] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:55.552 [2024-04-26 14:53:55.565414] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:16:55.552 [2024-04-26 14:53:55.565446] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:55.552 [2024-04-26 14:53:55.565482] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:55.552 14:53:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.552 14:53:55 -- target/device_removal.sh@187 -- # ib_count=2 00:16:55.552 14:53:55 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:16:55.552 14:53:55 -- target/device_removal.sh@189 -- # break 00:16:55.552 
14:53:55 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:16:55.552 14:53:55 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:16:55.552 14:53:55 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband 00:16:55.552 14:53:55 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:16:55.552 14:53:55 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:16:55.552 14:53:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:55.552 14:53:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:55.552 14:53:55 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:16:55.552 14:53:55 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:55.552 14:53:55 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:16:55.552 14:53:55 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1 00:16:55.552 14:53:55 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:16:55.552 14:53:55 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:16:55.552 14:53:55 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:55.552 14:53:55 -- target/device_removal.sh@77 -- # jq -r 
'.poll_groups[0].transports[].devices[].name' 00:16:55.552 14:53:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:55.552 14:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 14:53:55 -- target/device_removal.sh@77 -- # grep mlx5_1 00:16:55.814 14:53:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:55.814 mlx5_1 00:16:55.814 14:53:55 -- target/device_removal.sh@78 -- # return 0 00:16:55.814 14:53:55 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:16:55.814 14:53:55 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:16:55.814 14:53:55 -- target/device_removal.sh@67 -- # echo 1 00:16:55.814 14:53:55 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:16:55.814 14:53:55 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:55.814 14:53:55 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:16:55.814 [2024-04-26 14:53:55.669256] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:16:55.814 [2024-04-26 14:53:55.669386] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:55.814 [2024-04-26 14:53:55.671856] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:55.814 [2024-04-26 14:53:55.671904] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:16:55.814 [2024-04-26 14:53:55.671924] rdma.c: 632:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:16:55.814 [2024-04-26 14:53:55.671941] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.814 [2024-04-26 14:53:55.671956] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.814 [2024-04-26 14:53:55.671987] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.814 [2024-04-26 14:53:55.672000] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.814 [2024-04-26 14:53:55.672014] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:16:55.814 [2024-04-26 14:53:55.672028] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:16:55.814 [2024-04-26 14:53:55.672058] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:16:55.814 [2024-04-26 14:53:55.672072] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:16:55.814 [2024-04-26 14:53:55.672086] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.814 [2024-04-26 14:53:55.672122] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672147] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672163] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672177] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 
[2024-04-26 14:53:55.672197] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672229] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672245] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672260] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672274] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672288] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672301] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672316] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672330] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672344] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:16:55.815 [2024-04-26 14:53:55.672358] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:16:55.815 [2024-04-26 14:53:55.672371] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672385] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672399] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672429] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672446] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672462] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:16:55.815 [2024-04-26 14:53:55.672478] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:16:55.815 [2024-04-26 14:53:55.672493] rdma.c: 
620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1
00:16:55.815 [2024-04-26 14:53:55.672509] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0
00:16:55.815 [2024-04-26 14:53:55.672524] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1
[... the same rdma.c: 618/620 nvmf_rdma_dump_request pair ("Request Data From Pool: 0|1" / "Request opcode: 1|2") repeats for each remaining outstanding request through 2024-04-26 14:53:55.675175 ...]
00:16:55.816 [2024-04-26 14:53:55.675175] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0
00:16:55.816 [2024-04-26 14:53:55.675189] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1
00:17:00.016 14:53:59 -- target/device_removal.sh@147 -- # seq 1 10
00:17:00.016 14:53:59 -- target/device_removal.sh@147 -- # for i in $(seq 1 10)
00:17:00.016 14:53:59 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1
00:17:00.016 14:53:59 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1
00:17:00.016 14:53:59 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats
00:17:00.016 14:53:59 --
target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name'
00:17:00.016 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:00.016 14:53:59 -- target/device_removal.sh@77 -- # grep mlx5_1
00:17:00.016 14:53:59 -- common/autotest_common.sh@10 -- # set +x
00:17:00.016 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:00.016 14:53:59 -- target/device_removal.sh@78 -- # return 1
00:17:00.016 14:53:59 -- target/device_removal.sh@149 -- # break
00:17:00.016 14:53:59 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt
00:17:00.016 14:53:59 -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:17:00.016 14:53:59 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:17:00.016 14:53:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:00.016 14:53:59 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:17:00.016 14:53:59 -- common/autotest_common.sh@10 -- # set +x
00:17:00.016 14:53:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:00.016 14:53:59 -- target/device_removal.sh@158 -- # ib_count_after_remove=1
00:17:00.016 14:53:59 -- target/device_removal.sh@160 -- # rescan_pci
00:17:00.016 14:53:59 -- target/device_removal.sh@57 -- # echo 1
00:17:00.274 [2024-04-26 14:54:00.173915] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x617000006280, err 11. Skip rescan.
00:17:00.274 14:54:00 -- target/device_removal.sh@162 -- # seq 1 10
00:17:00.274 14:54:00 -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:17:00.274 14:54:00 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net
00:17:00.274 14:54:00 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1
00:17:00.274 14:54:00 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]]
00:17:00.274 14:54:00 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]]
00:17:00.274 14:54:00 -- target/device_removal.sh@171 -- # break
00:17:00.274 14:54:00 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]]
00:17:00.274 14:54:00 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up
00:17:00.274 [2024-04-26 14:54:00.275982] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027c40/0x7f1207279940) succeed.
00:17:00.274 [2024-04-26 14:54:00.276151] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
00:17:01.209 14:54:01 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1
00:17:01.209 14:54:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:17:01.209 14:54:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:17:01.209 14:54:01 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:17:01.209 14:54:01 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:17:01.209 14:54:01 -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:17:01.209 14:54:01 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1
00:17:01.209 14:54:01 -- target/device_removal.sh@186 -- # seq 1 10
00:17:01.209 14:54:01 -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:17:01.209 14:54:01 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:17:01.209 14:54:01 -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:17:01.209 14:54:01 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:17:01.209 14:54:01 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:17:01.209 14:54:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:01.209 14:54:01 -- common/autotest_common.sh@10 -- # set +x
00:17:01.209 [2024-04-26 14:54:01.190067] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:17:01.209 [2024-04-26 14:54:01.190115] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back
00:17:01.209 [2024-04-26 14:54:01.190168] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:17:01.209 [2024-04-26 14:54:01.190192] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:17:01.209 14:54:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:01.209 14:54:01 -- target/device_removal.sh@187 -- # ib_count=2
00:17:01.209 14:54:01 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:17:01.209 14:54:01 -- target/device_removal.sh@189 -- # break
14:54:01 -- target/device_removal.sh@200 -- # stop_bdevperf
00:17:01.209 14:54:01 -- target/device_removal.sh@116 -- # wait 228958
00:18:22.694 0
00:18:22.694 14:55:15 -- target/device_removal.sh@118 -- # killprocess 228812
00:18:22.694 14:55:15 -- common/autotest_common.sh@936 -- # '[' -z 228812 ']'
00:18:22.694 14:55:15 -- common/autotest_common.sh@940 -- # kill -0 228812
00:18:22.694 14:55:15 -- common/autotest_common.sh@941 -- # uname
00:18:22.694 14:55:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:22.694 14:55:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 228812
00:18:22.694 14:55:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:22.694 14:55:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:22.694 14:55:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 228812'
killing process with pid 228812
00:18:22.694 14:55:15 -- common/autotest_common.sh@955 -- # kill 228812
00:18:22.694 14:55:16 -- common/autotest_common.sh@960 -- # wait 228812
00:18:22.694 14:55:16 -- target/device_removal.sh@119 -- # bdevperf_pid=
00:18:22.694 14:55:16 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:18:22.694 [2024-04-26 14:53:43.883055] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:18:22.694 [2024-04-26 14:53:43.883218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228812 ]
00:18:22.694 EAL: No free 2048 kB hugepages reported on node 1
00:18:22.694 [2024-04-26 14:53:44.004662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.694 [2024-04-26 14:53:44.224061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:22.694 Running I/O for 90 seconds...
00:18:22.694 [2024-04-26 14:53:50.091206] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:22.694 [2024-04-26 14:53:50.091273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.694 [2024-04-26 14:53:50.091305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.694 [2024-04-26 14:53:50.091339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.694 [2024-04-26 14:53:50.091364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.695 [2024-04-26 14:53:50.091401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.695 [2024-04-26 14:53:50.091423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.695 [2024-04-26 14:53:50.091448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.695 [2024-04-26 14:53:50.091469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.695 [2024-04-26 14:53:50.093268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:22.695 [2024-04-26 14:53:50.093303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:22.695 [2024-04-26 14:53:50.093388] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:22.695 [2024-04-26 14:53:50.101151] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... the same bdev_nvme.c:2878 "Unable to perform failover, already in progress." notice repeats roughly every 10 ms through 2024-04-26 14:53:51.006360 ...]
00:18:22.696 [2024-04-26 14:53:51.006360] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.696 [2024-04-26 14:53:51.016382] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.026416] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.036669] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.046703] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.056725] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.067229] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.077246] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.696 [2024-04-26 14:53:51.088136] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:22.696 [2024-04-26 14:53:51.096163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.096947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.096975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 
14:53:51.097141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.696 [2024-04-26 14:53:51.097934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.696 [2024-04-26 14:53:51.097959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.097984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 
[2024-04-26 14:53:51.098690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.098964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.098989] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 
p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.099965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.099989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.100016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.697 [2024-04-26 14:53:51.100072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.697 [2024-04-26 14:53:51.100097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 
14:53:51.100211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.100959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.100983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.698 [2024-04-26 14:53:51.101378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000076ff000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007701000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007703000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007705000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007707000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007709000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770b000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770d000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770f000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007711000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47184 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007713000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.101967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.101992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007715000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.102042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007717000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.102068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.698 [2024-04-26 14:53:51.102093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007719000 len:0x1000 key:0x186900 00:18:22.698 [2024-04-26 14:53:51.102122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771b000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771d000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 
14:53:51.102231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771f000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007721000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007723000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007725000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007727000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007729000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772b000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772d000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772f000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007731000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102820] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007733000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007735000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007737000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.102966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.102992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007739000 len:0x1000 key:0x186900 00:18:22.699 [2024-04-26 14:53:51.103021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e sqhd:0000 p:0 m:0 dnr:0 00:18:22.699 [2024-04-26 14:53:51.138121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.699 [2024-04-26 14:53:51.138166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.699 [2024-04-26 14:53:51.138218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47344 len:8 PRP1 0x0 PRP2 0x0 00:18:22.699 [2024-04-26 14:53:51.138244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.699 [2024-04-26 14:53:51.142455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:22.699 [2024-04-26 14:53:51.142927] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:18:22.699 [2024-04-26 14:53:51.142962] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:22.699 [2024-04-26 14:53:51.142986] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840
00:18:22.699 [2024-04-26 14:53:51.143028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:22.699 [2024-04-26 14:53:51.143058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:22.699 [2024-04-26 14:53:51.143094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:22.699 [2024-04-26 14:53:51.143137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:22.699 [2024-04-26 14:53:51.143175] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:22.699 [2024-04-26 14:53:51.143237] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:22.699 [2024-04-26 14:53:51.143275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:22.699 [2024-04-26 14:53:53.150603] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:22.700 [2024-04-26 14:53:53.150676] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840
00:18:22.700 [2024-04-26 14:53:53.150740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:22.700 [2024-04-26 14:53:53.150767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:22.700 [2024-04-26 14:53:53.151473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:22.700 [2024-04-26 14:53:53.151503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:22.700 [2024-04-26 14:53:53.151529] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:22.700 [2024-04-26 14:53:53.151616] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:22.700 [2024-04-26 14:53:53.151651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:22.700 [2024-04-26 14:53:55.156690] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:22.700 [2024-04-26 14:53:55.156767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840
00:18:22.700 [2024-04-26 14:53:55.156825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:22.700 [2024-04-26 14:53:55.156854] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:18:22.700 [2024-04-26 14:53:55.156888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:18:22.700 [2024-04-26 14:53:55.156914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:18:22.700 [2024-04-26 14:53:55.156936] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:18:22.700 [2024-04-26 14:53:55.157001] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
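The failed reset attempts in this log are spaced roughly two seconds apart (14:53:51, 14:53:53, 14:53:55) before the transport eventually recovers. The loop below is only an abstract sketch of that retry pattern; the two-second delay is inferred from the log timestamps, not taken from any SPDK configuration, and `connect` stands in for whatever reconnect step the caller supplies:

```python
import time

def reset_with_retry(connect, max_attempts=10, delay_s=2.0, sleep=time.sleep):
    """Retry a controller reconnect until it succeeds or attempts run out.

    `connect` is a caller-supplied callable returning True on success.
    The ~2 s delay mirrors the spacing seen in the log above (an assumption,
    not an SPDK setting).
    """
    for attempt in range(1, max_attempts + 1):
        if connect():
            return attempt  # reconnected on this attempt
        if attempt < max_attempts:
            sleep(delay_s)  # e.g. after an RDMA address resolution error
    return None  # still in failed state after all attempts
```

A caller would typically give up and mark the controller failed when `None` comes back, which is what the repeated "Resetting controller failed." entries correspond to here.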
00:18:22.700 [2024-04-26 14:53:55.157028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:18:22.700 [2024-04-26 14:53:55.667954] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:18:22.700 [2024-04-26 14:53:55.668018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.700 [2024-04-26 14:53:55.668072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32616 cdw0:0 sqhd:8c60 p:0 m:0 dnr:0
00:18:22.700 [2024-04-26 14:53:55.668096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.700 [2024-04-26 14:53:55.668150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32616 cdw0:0 sqhd:8c60 p:0 m:0 dnr:0
00:18:22.700 [2024-04-26 14:53:55.668175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.700 [2024-04-26 14:53:55.668200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32616 cdw0:0 sqhd:8c60 p:0 m:0 dnr:0
00:18:22.700 [2024-04-26 14:53:55.668221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.700 [2024-04-26 14:53:55.668244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32616 cdw0:0 sqhd:8c60 p:0 m:0 dnr:0
00:18:22.700 [2024-04-26 14:53:55.676712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:22.700 [2024-04-26 14:53:55.676758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in
failed state. 00:18:22.700 [2024-04-26 14:53:55.676818] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:22.700 [2024-04-26 14:53:55.677904] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.687922] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.697960] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.707980] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.718011] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.728033] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.738075] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.748099] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.758135] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.768152] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.778182] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.788206] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:22.700 [2024-04-26 14:53:55.798240] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.808261] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.818298] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.828320] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.838353] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.848373] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.858409] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.868426] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.878467] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.888498] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.898520] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.908542] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:22.700 [2024-04-26 14:53:55.918575] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:22.700 [2024-04-26 14:53:55.928596] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.938630] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.948650] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.958681] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.968702] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.978732] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.988754] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:55.998786] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.008807] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.018843] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.028862] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.038892] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.048914] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.058946] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.068970] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.079000] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.089020] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.099051] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.109073] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.119122] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.129138] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.139168] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.149192] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.159225] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.169254] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.190999] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.200924] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.204762] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:22.700 [2024-04-26 14:53:56.210950] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.220979] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.231003] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.700 [2024-04-26 14:53:56.241037] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.251059] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.261092] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.271112] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.281151] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.291173] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.301202] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.311226] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.321259] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.331283] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.341318] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.351337] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.361371] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.371393] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.381427] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.391445] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.401491] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.411513] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.421546] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.431572] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.441603] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.451624] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.461654] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.471678] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.481713] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.491737] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.501771] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.511788] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.521816] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.531844] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.541868] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.551894] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.561923] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.571950] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.581980] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.592005] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.602032] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.612060] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.622087] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.632114] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.642146] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.652172] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.662200] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.672226] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:22.701 [2024-04-26 14:53:56.679311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.679941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.679980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.701 [2024-04-26 14:53:56.680284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.701 [2024-04-26 14:53:56.680308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.680999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.681960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.681983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.682004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.682028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.682049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.682072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.682093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.682116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.702 [2024-04-26 14:53:56.682146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.702 [2024-04-26 14:53:56.682171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.703 [2024-04-26 14:53:56.682601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000078ff000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007901000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007903000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007905000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007907000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007909000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790b000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790d000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.682971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.682994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790f000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007911000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007913000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007915000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007917000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007919000 len:0x1000 key:0x1c1b00
00:18:22.703 [2024-04-26 14:53:56.683281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0
00:18:22.703 [2024-04-26 14:53:56.683304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:103 nsid:1 lba:47216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791b000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791d000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791f000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007921000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007923000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47256 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007925000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007927000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007929000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792b000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792d000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792f000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 
14:53:56.683786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007931000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.703 [2024-04-26 14:53:56.683853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007933000 len:0x1000 key:0x1c1b00 00:18:22.703 [2024-04-26 14:53:56.683874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.683898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007935000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.683922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.683946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007937000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.683967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.683990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007939000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793b000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793d000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793f000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007941000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007943000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 
00:18:22.704 [2024-04-26 14:53:56.684288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007945000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007947000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007949000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794b000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794d000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794f000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007951000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007953000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007955000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007957000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47464 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007959000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795b000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795d000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795f000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.684957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.684980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007961000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007963000 len:0x1000 key:0x1c1b00 
00:18:22.704 [2024-04-26 14:53:56.685066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007965000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007967000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007969000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796b000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796d000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796f000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.685402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007971000 len:0x1000 key:0x1c1b00 00:18:22.704 [2024-04-26 14:53:56.685422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32616 cdw0:0 sqhd:b020 p:0 m:0 dnr:0 00:18:22.704 [2024-04-26 14:53:56.719644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.704 [2024-04-26 14:53:56.719674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.704 [2024-04-26 14:53:56.719695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47568 len:8 PRP1 0x0 PRP2 0x0 00:18:22.704 [2024-04-26 14:53:56.719716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.705 [2024-04-26 14:53:56.719914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:18:22.705 [2024-04-26 14:53:56.720313] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:18:22.705 [2024-04-26 14:53:56.720345] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:22.705 [2024-04-26 14:53:56.720363] 
nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:18:22.705 [2024-04-26 14:53:56.720402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:22.705 [2024-04-26 14:53:56.720432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:18:22.705 [2024-04-26 14:53:56.720465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:18:22.705 [2024-04-26 14:53:56.720487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:18:22.705 [2024-04-26 14:53:56.720524] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:18:22.705 [2024-04-26 14:53:56.720575] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:22.705 [2024-04-26 14:53:56.720599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:18:22.705 [2024-04-26 14:53:58.726265] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:22.705 [2024-04-26 14:53:58.726336] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:18:22.705 [2024-04-26 14:53:58.726396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:22.705 [2024-04-26 14:53:58.726423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:18:22.705 [2024-04-26 14:53:58.727851] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:18:22.705 [2024-04-26 14:53:58.727883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:18:22.705 [2024-04-26 14:53:58.727906] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:18:22.705 [2024-04-26 14:53:58.729297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:22.705 [2024-04-26 14:53:58.729331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:18:22.705 [2024-04-26 14:54:00.735566] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:22.705 [2024-04-26 14:54:00.735637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:18:22.705 [2024-04-26 14:54:00.735693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:22.705 [2024-04-26 14:54:00.735720] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:18:22.705 [2024-04-26 14:54:00.735775] bdev_nvme.c:2872:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 
00:18:22.705 [2024-04-26 14:54:00.736977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:18:22.705 [2024-04-26 14:54:00.737010] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:18:22.705 [2024-04-26 14:54:00.737034] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:18:22.705 [2024-04-26 14:54:00.737108] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:22.705 [2024-04-26 14:54:00.737190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:18:22.705 [2024-04-26 14:54:01.739755] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:22.705 [2024-04-26 14:54:01.739826] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:18:22.705 [2024-04-26 14:54:01.739887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:22.705 [2024-04-26 14:54:01.739912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:18:22.705 [2024-04-26 14:54:01.740565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:18:22.705 [2024-04-26 14:54:01.740604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:18:22.705 [2024-04-26 14:54:01.740628] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:18:22.705 [2024-04-26 14:54:01.740723] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:22.705 [2024-04-26 14:54:01.740754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:18:22.705 [2024-04-26 14:54:02.798180] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:22.705 00:18:22.705 Latency(us) 00:18:22.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.705 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.705 Verification LBA range: start 0x0 length 0x8000 00:18:22.705 Nvme_mlx_0_0n1 : 90.01 7431.40 29.03 0.00 0.00 17199.30 3737.98 7108568.56 00:18:22.705 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.705 Verification LBA range: start 0x0 length 0x8000 00:18:22.705 Nvme_mlx_0_1n1 : 90.01 6998.06 27.34 0.00 0.00 18263.81 449.04 8102773.95 00:18:22.705 =================================================================================================================== 00:18:22.705 Total : 14429.46 56.37 0.00 0.00 17715.56 449.04 8102773.95 00:18:22.705 Received shutdown signal, test time was about 90.000000 seconds 00:18:22.705 00:18:22.705 Latency(us) 00:18:22.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.705 =================================================================================================================== 00:18:22.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.705 14:55:16 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:18:22.705 14:55:16 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:18:22.705 14:55:16 -- target/device_removal.sh@202 -- # killprocess 228638 00:18:22.705 14:55:16 -- common/autotest_common.sh@936 -- # '[' -z 228638 ']' 00:18:22.705 14:55:16 -- common/autotest_common.sh@940 -- # kill -0 228638 00:18:22.705 14:55:16 -- common/autotest_common.sh@941 -- # uname 
00:18:22.705 14:55:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.705 14:55:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 228638 00:18:22.705 14:55:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.705 14:55:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.705 14:55:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 228638' 00:18:22.705 killing process with pid 228638 00:18:22.705 14:55:16 -- common/autotest_common.sh@955 -- # kill 228638 00:18:22.705 14:55:16 -- common/autotest_common.sh@960 -- # wait 228638 00:18:22.705 [2024-04-26 14:55:16.556069] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:22.705 14:55:18 -- target/device_removal.sh@203 -- # nvmfpid= 00:18:22.705 14:55:18 -- target/device_removal.sh@205 -- # return 0 00:18:22.705 00:18:22.705 real 1m36.173s 00:18:22.705 user 4m37.298s 00:18:22.705 sys 0m3.077s 00:18:22.705 14:55:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.705 14:55:18 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 ************************************ 00:18:22.705 END TEST nvmf_device_removal_pci_remove_no_srq 00:18:22.705 ************************************ 00:18:22.705 14:55:18 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:18:22.705 14:55:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:22.705 14:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.705 14:55:18 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 ************************************ 00:18:22.705 START TEST nvmf_device_removal_pci_remove 00:18:22.705 ************************************ 00:18:22.705 14:55:18 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan 00:18:22.705 14:55:18 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:18:22.705 14:55:18 -- nvmf/common.sh@468 
-- # timing_enter start_nvmf_tgt 00:18:22.705 14:55:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.705 14:55:18 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 14:55:18 -- nvmf/common.sh@470 -- # nvmfpid=240720 00:18:22.705 14:55:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:22.705 14:55:18 -- nvmf/common.sh@471 -- # waitforlisten 240720 00:18:22.705 14:55:18 -- common/autotest_common.sh@817 -- # '[' -z 240720 ']' 00:18:22.705 14:55:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.705 14:55:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.705 14:55:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.705 14:55:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.705 14:55:18 -- common/autotest_common.sh@10 -- # set +x 00:18:22.705 [2024-04-26 14:55:18.725193] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:22.705 [2024-04-26 14:55:18.725330] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.705 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.705 [2024-04-26 14:55:18.844750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:22.705 [2024-04-26 14:55:19.061854] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.705 [2024-04-26 14:55:19.061924] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:22.705 [2024-04-26 14:55:19.061943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.705 [2024-04-26 14:55:19.061962] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.705 [2024-04-26 14:55:19.061977] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.705 [2024-04-26 14:55:19.062089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.705 [2024-04-26 14:55:19.062094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.705 14:55:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.705 14:55:19 -- common/autotest_common.sh@850 -- # return 0 00:18:22.705 14:55:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:22.706 14:55:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.706 14:55:19 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.706 14:55:19 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:18:22.706 14:55:19 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:18:22.706 14:55:19 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:18:22.706 14:55:19 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:22.706 14:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:19 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 [2024-04-26 14:55:19.727198] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027c40/0x7f8e0c561940) succeed. 00:18:22.706 [2024-04-26 14:55:19.739013] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027dc0/0x7f8e0c51d940) succeed. 
00:18:22.706 14:55:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:19 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:18:22.706 14:55:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:22.706 14:55:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:22.706 14:55:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:22.706 14:55:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:22.706 14:55:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:22.706 14:55:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:22.706 14:55:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.706 14:55:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:22.706 14:55:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:22.706 14:55:19 -- nvmf/common.sh@105 -- # continue 2 00:18:22.706 14:55:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:22.706 14:55:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.706 14:55:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:22.706 14:55:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:22.706 14:55:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:22.706 14:55:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:22.706 14:55:19 -- nvmf/common.sh@105 -- # continue 2 00:18:22.706 14:55:19 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:22.706 14:55:19 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@25 -- # local -a dev_name 00:18:22.706 14:55:19 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:18:22.706 
14:55:19 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:18:22.706 14:55:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:22.706 14:55:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:22.706 14:55:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:22.706 14:55:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:22.706 14:55:19 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:18:22.706 14:55:19 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:18:22.706 14:55:19 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:22.706 14:55:19 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:22.706 14:55:19 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:18:22.706 14:55:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:19 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t 
rdma -a 192.168.100.8 -s 4420 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 [2024-04-26 14:55:20.127366] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@41 -- # return 0 00:18:22.706 14:55:20 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:18:22.706 14:55:20 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:18:22.706 14:55:20 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@25 -- # local -a dev_name 00:18:22.706 14:55:20 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:18:22.706 14:55:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:22.706 14:55:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:22.706 14:55:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:22.706 14:55:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:22.706 14:55:20 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:18:22.706 14:55:20 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:18:22.706 14:55:20 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:18:22.706 14:55:20 -- target/device_removal.sh@36 -- # rpc_cmd 
bdev_malloc_create 128 512 -b mlx_0_1 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:18:22.706 14:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 [2024-04-26 14:55:20.315473] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:22.706 14:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:20 -- target/device_removal.sh@41 -- # return 0 00:18:22.706 14:55:20 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@53 -- # return 0 00:18:22.706 14:55:20 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:18:22.706 14:55:20 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:18:22.706 14:55:20 -- target/device_removal.sh@87 -- # local dev_names 00:18:22.706 14:55:20 -- 
target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:22.706 14:55:20 -- target/device_removal.sh@91 -- # bdevperf_pid=240906 00:18:22.706 14:55:20 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.706 14:55:20 -- target/device_removal.sh@94 -- # waitforlisten 240906 /var/tmp/bdevperf.sock 00:18:22.706 14:55:20 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:22.706 14:55:20 -- common/autotest_common.sh@817 -- # '[' -z 240906 ']' 00:18:22.706 14:55:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.706 14:55:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.706 14:55:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:22.706 14:55:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.706 14:55:20 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.706 14:55:21 -- common/autotest_common.sh@850 -- # return 0 00:18:22.706 14:55:21 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:22.706 14:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:21 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 14:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:21 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:22.706 14:55:21 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:18:22.706 14:55:21 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:18:22.706 14:55:21 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:18:22.706 14:55:21 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:18:22.706 14:55:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:22.706 14:55:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:22.706 14:55:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:22.706 14:55:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:22.706 14:55:21 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:18:22.706 14:55:21 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:18:22.706 14:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.706 14:55:21 -- common/autotest_common.sh@10 -- # set +x 00:18:22.706 Nvme_mlx_0_0n1 00:18:22.706 14:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.706 14:55:21 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:18:22.707 
14:55:21 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:18:22.707 14:55:21 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:18:22.707 14:55:21 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:18:22.707 14:55:21 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:18:22.707 14:55:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:22.707 14:55:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:22.707 14:55:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:22.707 14:55:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:22.707 14:55:21 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:18:22.707 14:55:21 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:18:22.707 14:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.707 14:55:21 -- common/autotest_common.sh@10 -- # set +x 00:18:22.707 Nvme_mlx_0_1n1 00:18:22.707 14:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.707 14:55:21 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=241052 00:18:22.707 14:55:21 -- target/device_removal.sh@112 -- # sleep 5 00:18:22.707 14:55:21 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:26.898 14:55:26 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:26.898 14:55:26 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 
00:18:26.898 14:55:26 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:18:26.898 14:55:26 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:18:26.898 14:55:26 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:18:26.898 14:55:26 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:18:26.898 14:55:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:26.898 14:55:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:26.898 14:55:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:26.898 14:55:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:26.898 14:55:26 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:18:26.898 14:55:26 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:18:26.898 14:55:26 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:18:26.898 14:55:26 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:26.898 14:55:26 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:26.898 14:55:26 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:26.898 14:55:26 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:26.898 14:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.898 14:55:26 -- common/autotest_common.sh@10 -- # set +x 00:18:26.898 14:55:26 -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:26.898 14:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.898 mlx5_0 00:18:26.898 14:55:26 -- target/device_removal.sh@78 -- # return 0 00:18:26.898 14:55:26 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:18:26.898 
14:55:26 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@67 -- # echo 1 00:18:26.898 14:55:26 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:18:26.898 14:55:26 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:18:26.898 [2024-04-26 14:55:26.642420] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:18:26.898 [2024-04-26 14:55:26.642575] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:26.898 [2024-04-26 14:55:26.642696] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:26.898 [2024-04-26 14:55:26.642742] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:18:31.093 14:55:30 -- target/device_removal.sh@147 -- # seq 1 10 00:18:31.093 14:55:30 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:31.093 14:55:30 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:18:31.093 14:55:30 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:18:31.093 14:55:30 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:31.094 14:55:30 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:31.094 14:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.094 14:55:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 14:55:30 -- target/device_removal.sh@77 -- # grep mlx5_0 00:18:31.094 14:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.094 14:55:30 -- target/device_removal.sh@78 -- # return 1 00:18:31.094 14:55:30 -- target/device_removal.sh@149 -- # break 00:18:31.094 14:55:30 -- target/device_removal.sh@158 -- # 
get_rdma_dev_count_in_nvmf_tgt 00:18:31.094 14:55:30 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:31.094 14:55:30 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:31.094 14:55:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.094 14:55:30 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:31.094 14:55:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.094 14:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.094 14:55:30 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:18:31.094 14:55:30 -- target/device_removal.sh@160 -- # rescan_pci 00:18:31.094 14:55:30 -- target/device_removal.sh@57 -- # echo 1 00:18:31.354 [2024-04-26 14:55:31.346318] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x617000006600, err 11. Skip rescan. 00:18:31.354 14:55:31 -- target/device_removal.sh@162 -- # seq 1 10 00:18:31.354 14:55:31 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:31.354 14:55:31 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net 00:18:31.354 14:55:31 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:18:31.354 14:55:31 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:18:31.354 14:55:31 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:18:31.354 14:55:31 -- target/device_removal.sh@171 -- # break 00:18:31.354 14:55:31 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:18:31.354 14:55:31 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:18:31.614 [2024-04-26 14:55:31.456350] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a4c0/0x7f8dfd991940) succeed. 00:18:31.614 [2024-04-26 14:55:31.456495] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
00:18:32.550 14:55:32 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:18:32.550 14:55:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:32.550 14:55:32 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:32.550 14:55:32 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:18:32.550 14:55:32 -- target/device_removal.sh@186 -- # seq 1 10 00:18:32.550 14:55:32 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:32.550 14:55:32 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:32.550 14:55:32 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:32.550 14:55:32 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:32.550 14:55:32 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:32.550 14:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:32.550 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:18:32.550 [2024-04-26 14:55:32.381644] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:32.550 [2024-04-26 14:55:32.381714] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:18:32.550 [2024-04-26 14:55:32.381747] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:32.550 [2024-04-26 14:55:32.381773] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:32.550 14:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:32.550 14:55:32 -- target/device_removal.sh@187 -- # ib_count=2 00:18:32.550 14:55:32 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:32.550 14:55:32 -- target/device_removal.sh@189 -- # break 00:18:32.550 
14:55:32 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:18:32.550 14:55:32 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:18:32.550 14:55:32 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband 00:18:32.550 14:55:32 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:18:32.550 14:55:32 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:18:32.550 14:55:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:32.550 14:55:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:32.550 14:55:32 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:18:32.550 14:55:32 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:18:32.550 14:55:32 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1 00:18:32.550 14:55:32 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:32.550 14:55:32 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:32.550 14:55:32 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:32.550 14:55:32 -- target/device_removal.sh@77 -- # jq -r 
'.poll_groups[0].transports[].devices[].name' 00:18:32.550 14:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:32.550 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:18:32.550 14:55:32 -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:32.550 14:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:32.550 mlx5_1 00:18:32.550 14:55:32 -- target/device_removal.sh@78 -- # return 0 00:18:32.550 14:55:32 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@67 -- # echo 1 00:18:32.550 14:55:32 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:18:32.550 14:55:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:18:32.551 [2024-04-26 14:55:32.487078] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:18:32.551 [2024-04-26 14:55:32.487260] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:32.551 [2024-04-26 14:55:32.487358] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:32.551 [2024-04-26 14:55:32.487389] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 97 00:18:36.869 14:55:36 -- target/device_removal.sh@147 -- # seq 1 10 00:18:36.869 14:55:36 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:18:36.869 14:55:36 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:18:36.869 14:55:36 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:18:36.869 14:55:36 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:36.869 14:55:36 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:18:36.869 14:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.869 14:55:36 -- common/autotest_common.sh@10 -- # set +x 00:18:36.869 14:55:36 -- target/device_removal.sh@77 -- # grep mlx5_1 00:18:36.869 14:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.869 14:55:36 -- target/device_removal.sh@78 -- # return 1 00:18:36.869 14:55:36 -- target/device_removal.sh@149 -- # break 00:18:36.869 14:55:36 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:36.869 14:55:36 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:36.869 14:55:36 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:36.869 14:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.869 14:55:36 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:36.869 14:55:36 -- common/autotest_common.sh@10 -- # set +x 00:18:36.869 14:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.869 14:55:36 -- target/device_removal.sh@158 -- # 
ib_count_after_remove=1 00:18:36.869 14:55:36 -- target/device_removal.sh@160 -- # rescan_pci 00:18:36.869 14:55:36 -- target/device_removal.sh@57 -- # echo 1 00:18:37.127 [2024-04-26 14:55:36.997296] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x617000006280, err 11. Skip rescan. 00:18:37.127 14:55:37 -- target/device_removal.sh@162 -- # seq 1 10 00:18:37.127 14:55:37 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:18:37.127 14:55:37 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net 00:18:37.127 14:55:37 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:18:37.127 14:55:37 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:18:37.127 14:55:37 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:18:37.127 14:55:37 -- target/device_removal.sh@171 -- # break 00:18:37.127 14:55:37 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:18:37.127 14:55:37 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:18:37.127 [2024-04-26 14:55:37.106925] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027c40/0x7f8e01d09940) succeed. 00:18:37.127 [2024-04-26 14:55:37.107085] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
00:18:38.061 14:55:37 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:18:38.061 14:55:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:38.061 14:55:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:38.061 14:55:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:38.061 14:55:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:38.061 14:55:37 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:18:38.061 14:55:37 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:18:38.061 14:55:37 -- target/device_removal.sh@186 -- # seq 1 10 00:18:38.061 14:55:37 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:18:38.061 14:55:37 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:18:38.061 14:55:37 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:18:38.061 14:55:37 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:18:38.061 14:55:37 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:18:38.061 14:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.061 14:55:37 -- common/autotest_common.sh@10 -- # set +x 00:18:38.061 [2024-04-26 14:55:37.998248] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:18:38.061 [2024-04-26 14:55:37.998309] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:18:38.061 [2024-04-26 14:55:37.998342] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:38.061 [2024-04-26 14:55:37.998366] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:18:38.061 14:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.061 14:55:38 -- target/device_removal.sh@187 -- # ib_count=2 00:18:38.061 14:55:38 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:18:38.061 14:55:38 -- target/device_removal.sh@189 -- # break 00:18:38.061 
14:55:38 -- target/device_removal.sh@200 -- # stop_bdevperf 00:18:38.061 14:55:38 -- target/device_removal.sh@116 -- # wait 241052 00:19:59.506 0 00:19:59.506 14:56:51 -- target/device_removal.sh@118 -- # killprocess 240906 00:19:59.506 14:56:51 -- common/autotest_common.sh@936 -- # '[' -z 240906 ']' 00:19:59.506 14:56:51 -- common/autotest_common.sh@940 -- # kill -0 240906 00:19:59.506 14:56:51 -- common/autotest_common.sh@941 -- # uname 00:19:59.506 14:56:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.506 14:56:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 240906 00:19:59.506 14:56:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:59.506 14:56:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:59.506 14:56:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 240906' 00:19:59.506 killing process with pid 240906 00:19:59.506 14:56:51 -- common/autotest_common.sh@955 -- # kill 240906 00:19:59.506 14:56:51 -- common/autotest_common.sh@960 -- # wait 240906 00:19:59.506 14:56:52 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:19:59.506 14:56:52 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:19:59.507 [2024-04-26 14:55:20.403640] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:59.507 [2024-04-26 14:55:20.403799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240906 ] 00:19:59.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.507 [2024-04-26 14:55:20.532616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.507 [2024-04-26 14:55:20.765869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.507 Running I/O for 90 seconds... 
00:19:59.507 [2024-04-26 14:55:26.638703] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:19:59.507 [2024-04-26 14:55:26.638785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:59.507 [2024-04-26 14:55:26.638813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.507 [2024-04-26 14:55:26.638845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:59.507 [2024-04-26 14:55:26.638866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.507 [2024-04-26 14:55:26.638889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:59.507 [2024-04-26 14:55:26.638926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.507 [2024-04-26 14:55:26.638975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:59.507 [2024-04-26 14:55:26.638998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:59.507 [2024-04-26 14:55:26.642314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:59.507 [2024-04-26 14:55:26.642352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:19:59.507 [2024-04-26 14:55:26.642435] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:19:59.507 [2024-04-26 14:55:26.648675] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[...]
00:19:59.508 [2024-04-26 14:55:27.636741] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:59.508 [2024-04-26 14:55:27.645373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:59.508 [2024-04-26 14:55:27.645438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[...]
00:19:59.509 [2024-04-26 14:55:27.648594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:59.509 [2024-04-26 14:55:27.648616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.648665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.648714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.648762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.648815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.648880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.648915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 
[2024-04-26 14:55:27.648948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.649022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.649116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.649194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.649232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.649285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.649358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.649419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.509 [2024-04-26 14:55:27.649455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.509 [2024-04-26 14:55:27.649503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.649689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.649767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.649840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.649929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.649967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.650935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.650995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.651034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.651118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.651227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.510 [2024-04-26 14:55:27.651312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000076ff000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007701000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007703000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007705000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007707000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007709000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.651926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770b000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.651963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770d000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770f000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007711000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007713000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007715000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007717000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007719000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771b000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771d000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771f000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.652945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007721000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.652990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.510 [2024-04-26 14:55:27.653044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007723000 len:0x1000 key:0x1860ef 00:19:59.510 [2024-04-26 14:55:27.653118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45208 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007725000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007727000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007729000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772b000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772d000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772f000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653735] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007731000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007733000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.653925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.653971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007735000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007737000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007739000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773b000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773d000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773f000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007741000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007743000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007745000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007747000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007749000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774b000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.654877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774d000 len:0x1000 key:0x1860ef 00:19:59.511 [2024-04-26 14:55:27.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.691015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.511 [2024-04-26 
14:55:27.691050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.511 [2024-04-26 14:55:27.691093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:19:59.511 [2024-04-26 14:55:27.691117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.511 [2024-04-26 14:55:27.695690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:19:59.511 [2024-04-26 14:55:27.696211] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:19:59.511 [2024-04-26 14:55:27.696250] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.511 [2024-04-26 14:55:27.696271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:19:59.511 [2024-04-26 14:55:27.696315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.511 [2024-04-26 14:55:27.696342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:19:59.511 [2024-04-26 14:55:27.696378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:19:59.511 [2024-04-26 14:55:27.696405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:19:59.511 [2024-04-26 14:55:27.696431] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:19:59.511 [2024-04-26 14:55:27.696486] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.511 [2024-04-26 14:55:27.696516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:19:59.511 [2024-04-26 14:55:29.702019] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.511 [2024-04-26 14:55:29.702101] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:19:59.511 [2024-04-26 14:55:29.702196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.511 [2024-04-26 14:55:29.702240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:19:59.512 [2024-04-26 14:55:29.702280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:19:59.512 [2024-04-26 14:55:29.702309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:19:59.512 [2024-04-26 14:55:29.702334] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:19:59.512 [2024-04-26 14:55:29.702430] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.512 [2024-04-26 14:55:29.702460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:19:59.512 [2024-04-26 14:55:31.708816] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.512 [2024-04-26 14:55:31.708887] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:19:59.512 [2024-04-26 14:55:31.708963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.512 [2024-04-26 14:55:31.709005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:19:59.512 [2024-04-26 14:55:31.709044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:19:59.512 [2024-04-26 14:55:31.709068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:19:59.512 [2024-04-26 14:55:31.709095] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:19:59.512 [2024-04-26 14:55:31.709164] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.512 [2024-04-26 14:55:31.709202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:19:59.512 [2024-04-26 14:55:32.482304] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:59.512 [2024-04-26 14:55:32.482375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.512 [2024-04-26 14:55:32.482404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:0 sqhd:84e0 p:0 m:0 dnr:0 00:19:59.512 [2024-04-26 14:55:32.482441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.512 [2024-04-26 14:55:32.482463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:0 sqhd:84e0 p:0 m:0 dnr:0 00:19:59.512 [2024-04-26 14:55:32.482498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.512 [2024-04-26 14:55:32.482521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:0 sqhd:84e0 p:0 m:0 dnr:0 00:19:59.512 [2024-04-26 14:55:32.482545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.512 [2024-04-26 14:55:32.482566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32709 cdw0:0 sqhd:84e0 p:0 m:0 dnr:0 00:19:59.512 [2024-04-26 14:55:32.489835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.512 [2024-04-26 14:55:32.489884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in 
failed state. 00:19:59.512 [2024-04-26 14:55:32.489973] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:19:59.512 [2024-04-26 14:55:32.492288] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.502319] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.512342] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.522377] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.532394] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.542442] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.552464] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.562507] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.572511] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.582542] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.592567] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.602600] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.512 [2024-04-26 14:55:32.612620] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.622649] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.632673] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.642705] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.652727] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.662760] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.672784] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.682821] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.692837] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.702867] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.715966] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.738287] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.748213] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.512 [2024-04-26 14:55:32.752087] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:59.512 [2024-04-26 14:55:32.758242] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.768276] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.778298] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.788330] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.798353] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.808384] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.818418] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.828458] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.838477] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.848512] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.858534] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.868565] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.512 [2024-04-26 14:55:32.878585] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.888621] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.898644] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.908678] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.918700] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.928733] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.938758] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.948788] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.958809] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.968839] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.978864] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.988895] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:32.998917] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.512 [2024-04-26 14:55:33.008953] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.018976] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.029010] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.039028] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.049059] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.059080] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.069109] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.079136] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.089160] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.099185] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.109219] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.119241] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.512 [2024-04-26 14:55:33.129270] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.513 [2024-04-26 14:55:33.139294] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.149323] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.159344] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.169380] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.179403] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.189435] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.199459] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.209493] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.219513] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.229550] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.239566] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.249598] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.259620] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.513 [2024-04-26 14:55:33.269650] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.279670] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.289700] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.299724] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.309759] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.319781] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.329817] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.339837] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.349863] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.359887] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.369914] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.379941] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.389967] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.513 [2024-04-26 14:55:33.399995] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.410024] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.420053] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.430078] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.440104] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.450138] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.460158] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.470188] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.480213] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:59.513 [2024-04-26 14:55:33.490239] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:59.513 [2024-04-26 14:55:33.492472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492768] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.492956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.492993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.513 [2024-04-26 14:55:33.493563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:59.513 [2024-04-26 14:55:33.493624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.513 [2024-04-26 14:55:33.493648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.493958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.493982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47424 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 
[2024-04-26 14:55:33.494849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.494953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.494975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:87 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 
sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.514 [2024-04-26 14:55:33.495713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.514 [2024-04-26 14:55:33.495734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.495757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.495778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.495801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.495822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.495845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.495865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.495888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.495909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.495948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 
14:55:33.495969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47784 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.496958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.496981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 
14:55:33.497025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.515 [2024-04-26 14:55:33.497348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.515 [2024-04-26 14:55:33.497368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 
sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.497970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.497990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 
14:55:33.498136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.516 [2024-04-26 14:55:33.498767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32709 cdw0:0 sqhd:d1e0 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.533789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.516 [2024-04-26 14:55:33.533820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.516 [2024-04-26 14:55:33.533840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47104 len:8 PRP1 0x0 PRP2 0x0 00:19:59.516 [2024-04-26 14:55:33.533860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.516 [2024-04-26 14:55:33.534060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:19:59.516 [2024-04-26 14:55:33.534523] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED 
but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:19:59.516 [2024-04-26 14:55:33.534556] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.516 [2024-04-26 14:55:33.534574] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:19:59.516 [2024-04-26 14:55:33.534611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.516 [2024-04-26 14:55:33.534634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:19:59.516 [2024-04-26 14:55:33.534663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:19:59.516 [2024-04-26 14:55:33.534685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:19:59.516 [2024-04-26 14:55:33.534707] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:19:59.516 [2024-04-26 14:55:33.534755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:59.516 [2024-04-26 14:55:33.534779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:19:59.516 [2024-04-26 14:55:35.541103] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.516 [2024-04-26 14:55:35.541183] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:19:59.516 [2024-04-26 14:55:35.541240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.516 [2024-04-26 14:55:35.541266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:19:59.516 [2024-04-26 14:55:35.541303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:19:59.516 [2024-04-26 14:55:35.541328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:19:59.516 [2024-04-26 14:55:35.541358] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:19:59.516 [2024-04-26 14:55:35.542813] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:59.516 [2024-04-26 14:55:35.542849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:19:59.517 [2024-04-26 14:55:37.547915] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:19:59.517 [2024-04-26 14:55:37.547990] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:19:59.517 [2024-04-26 14:55:37.548048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:59.517 [2024-04-26 14:55:37.548076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:19:59.517 [2024-04-26 14:55:37.548162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:19:59.517 [2024-04-26 14:55:37.548193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:19:59.517 [2024-04-26 14:55:37.548218] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:19:59.517 [2024-04-26 14:55:37.548297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:59.517 [2024-04-26 14:55:37.548327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:19:59.517 [2024-04-26 14:55:38.606330] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:59.517
00:19:59.517 Latency(us)
00:19:59.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.517 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:59.517 Verification LBA range: start 0x0 length 0x8000
00:19:59.517 Nvme_mlx_0_0n1 : 90.02 7313.19 28.57 0.00 0.00 17476.10 3689.43 7108568.56
00:19:59.517 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:59.517 Verification LBA range: start 0x0 length 0x8000
00:19:59.517 Nvme_mlx_0_1n1 : 90.02 6987.48 27.29 0.00 0.00 18293.20 4271.98 7108568.56
00:19:59.517 ===================================================================================================================
00:19:59.517 Total : 14300.67 55.86 0.00 0.00 17875.34 3689.43 7108568.56
00:19:59.517 Received shutdown signal, test time was about 90.000000 seconds
00:19:59.517
00:19:59.517 Latency(us)
00:19:59.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.517 ===================================================================================================================
00:19:59.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:59.517 14:56:52 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:19:59.517 14:56:52 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:19:59.517 14:56:52 -- target/device_removal.sh@202 -- # killprocess 240720
00:19:59.517 14:56:52 -- common/autotest_common.sh@936 -- # '[' -z 240720 ']'
00:19:59.517 14:56:52 -- common/autotest_common.sh@940 -- # kill -0 240720
00:19:59.517 14:56:52 -- common/autotest_common.sh@941 -- # uname
00:19:59.517 14:56:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.517 14:56:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 240720 00:19:59.517 14:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.517 14:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.517 14:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 240720' 00:19:59.517 killing process with pid 240720 00:19:59.517 14:56:53 -- common/autotest_common.sh@955 -- # kill 240720 00:19:59.517 14:56:53 -- common/autotest_common.sh@960 -- # wait 240720 00:19:59.517 [2024-04-26 14:56:53.350394] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:59.517 14:56:55 -- target/device_removal.sh@203 -- # nvmfpid= 00:19:59.517 14:56:55 -- target/device_removal.sh@205 -- # return 0 00:19:59.517 00:19:59.517 real 1m36.757s 00:19:59.517 user 4m38.430s 00:19:59.517 sys 0m3.331s 00:19:59.517 14:56:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.517 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:19:59.517 ************************************ 00:19:59.517 END TEST nvmf_device_removal_pci_remove 00:19:59.517 ************************************ 00:19:59.517 14:56:55 -- target/device_removal.sh@317 -- # nvmftestfini 00:19:59.517 14:56:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:59.517 14:56:55 -- nvmf/common.sh@117 -- # sync 00:19:59.517 14:56:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@120 -- # set +e 00:19:59.517 14:56:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.517 14:56:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:59.517 rmmod nvme_rdma 00:19:59.517 rmmod nvme_fabrics 00:19:59.517 14:56:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.517 14:56:55 -- nvmf/common.sh@124 -- # 
set -e 00:19:59.517 14:56:55 -- nvmf/common.sh@125 -- # return 0 00:19:59.517 14:56:55 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:59.517 14:56:55 -- target/device_removal.sh@318 -- # clean_bond_device 00:19:59.517 14:56:55 -- target/device_removal.sh@240 -- # ip link 00:19:59.517 14:56:55 -- target/device_removal.sh@240 -- # grep bond_nvmf 00:19:59.517 00:19:59.517 real 3m15.429s 00:19:59.517 user 9m16.694s 00:19:59.517 sys 0m8.011s 00:19:59.517 14:56:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.517 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:19:59.517 ************************************ 00:19:59.517 END TEST nvmf_device_removal 00:19:59.517 ************************************ 00:19:59.517 14:56:55 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:59.517 14:56:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.517 14:56:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.517 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:19:59.517 ************************************ 00:19:59.517 START TEST nvmf_srq_overwhelm 00:19:59.517 ************************************ 00:19:59.517 14:56:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:59.517 * Looking for test storage... 
00:19:59.517 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:59.517 14:56:55 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.517 14:56:55 -- nvmf/common.sh@7 -- # uname -s 00:19:59.517 14:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.517 14:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.517 14:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.517 14:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.517 14:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.517 14:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.517 14:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.517 14:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.517 14:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.517 14:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.517 14:56:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:19:59.517 14:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:19:59.517 14:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.517 14:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.517 14:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.517 14:56:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.517 14:56:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:59.517 14:56:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.517 14:56:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.517 14:56:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.517 14:56:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.517 14:56:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.517 14:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.517 14:56:55 -- paths/export.sh@5 -- # export PATH 00:19:59.517 14:56:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.517 14:56:55 -- nvmf/common.sh@47 -- # : 0 00:19:59.517 14:56:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.517 14:56:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.517 14:56:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.517 14:56:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.517 14:56:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.517 14:56:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.517 14:56:55 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.517 14:56:55 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.518 14:56:55 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:19:59.518 14:56:55 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:19:59.518 14:56:55 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:59.518 14:56:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.518 14:56:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:59.518 14:56:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:59.518 14:56:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:59.518 14:56:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.518 14:56:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:59.518 14:56:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.518 14:56:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:59.518 14:56:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:59.518 14:56:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.518 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:19:59.518 14:56:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:59.518 14:56:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.518 14:56:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.518 14:56:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.518 14:56:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.518 14:56:57 -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.518 14:56:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@296 -- # e810=() 00:19:59.518 14:56:57 -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.518 14:56:57 -- nvmf/common.sh@297 -- # x722=() 00:19:59.518 14:56:57 -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.518 14:56:57 -- nvmf/common.sh@298 -- # mlx=() 00:19:59.518 14:56:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.518 14:56:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.518 14:56:57 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.518 14:56:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.518 14:56:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:19:59.518 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:19:59.518 14:56:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:59.518 14:56:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:19:59.518 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:19:59.518 14:56:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:59.518 14:56:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.518 14:56:57 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.518 14:56:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:59.518 14:56:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.518 14:56:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:19:59.518 Found net devices under 0000:09:00.0: mlx_0_0 00:19:59.518 14:56:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.518 14:56:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:59.518 14:56:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.518 14:56:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:19:59.518 Found net devices under 0000:09:00.1: mlx_0_1 00:19:59.518 14:56:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.518 14:56:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:59.518 14:56:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:59.518 14:56:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:59.518 14:56:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:59.518 14:56:57 -- nvmf/common.sh@58 -- # uname 00:19:59.518 14:56:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:59.518 14:56:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:59.518 14:56:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:59.518 14:56:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:59.518 14:56:57 -- 
nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:59.518 14:56:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:59.518 14:56:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:59.518 14:56:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:59.518 14:56:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:59.518 14:56:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:59.518 14:56:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:59.518 14:56:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:59.518 14:56:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:59.518 14:56:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:59.518 14:56:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:59.518 14:56:57 -- nvmf/common.sh@105 -- # continue 2 00:19:59.518 14:56:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.518 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:59.518 14:56:57 -- nvmf/common.sh@105 -- # continue 2 00:19:59.518 14:56:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:59.518 14:56:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:59.518 14:56:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:59.518 14:56:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:59.518 14:56:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:59.518 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:59.518 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:19:59.518 altname enp9s0f0np0 00:19:59.518 inet 192.168.100.8/24 scope global mlx_0_0 00:19:59.518 valid_lft forever preferred_lft forever 00:19:59.518 14:56:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:59.518 14:56:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:59.518 14:56:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:59.518 14:56:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:59.518 14:56:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:59.518 14:56:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:59.518 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:59.518 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:19:59.518 altname enp9s0f1np1 00:19:59.518 inet 192.168.100.9/24 scope global mlx_0_1 00:19:59.518 valid_lft forever preferred_lft forever 00:19:59.518 14:56:57 -- nvmf/common.sh@411 -- # return 0 00:19:59.518 14:56:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:59.518 14:56:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:59.518 14:56:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:59.518 14:56:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:59.518 14:56:57 -- nvmf/common.sh@86 
-- # get_rdma_if_list 00:19:59.518 14:56:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:59.518 14:56:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:59.518 14:56:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:59.518 14:56:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:59.519 14:56:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:59.519 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.519 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:59.519 14:56:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:59.519 14:56:57 -- nvmf/common.sh@105 -- # continue 2 00:19:59.519 14:56:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:59.519 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.519 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:59.519 14:56:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:59.519 14:56:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:59.519 14:56:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:59.519 14:56:57 -- nvmf/common.sh@105 -- # continue 2 00:19:59.519 14:56:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:59.519 14:56:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:59.519 14:56:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:59.519 14:56:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:59.519 14:56:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:59.519 14:56:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:59.519 14:56:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:59.519 14:56:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:59.519 192.168.100.9' 00:19:59.519 14:56:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:59.519 192.168.100.9' 00:19:59.519 14:56:57 -- nvmf/common.sh@446 -- # head -n 1 00:19:59.519 14:56:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:59.519 14:56:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:59.519 192.168.100.9' 00:19:59.519 14:56:57 -- nvmf/common.sh@447 -- # tail -n +2 00:19:59.519 14:56:57 -- nvmf/common.sh@447 -- # head -n 1 00:19:59.519 14:56:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:59.519 14:56:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:59.519 14:56:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:59.519 14:56:57 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:59.519 14:56:57 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:59.519 14:56:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:59.519 14:56:57 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:19:59.519 14:56:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:59.519 14:56:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:59.519 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.519 14:56:57 -- nvmf/common.sh@470 -- # nvmfpid=253840 00:19:59.519 14:56:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:59.519 14:56:57 -- nvmf/common.sh@471 -- # waitforlisten 253840 00:19:59.519 14:56:57 -- common/autotest_common.sh@817 -- # '[' -z 253840 ']' 00:19:59.519 14:56:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.519 14:56:57 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:19:59.519 14:56:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.519 14:56:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:59.519 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:19:59.519 [2024-04-26 14:56:57.606121] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:59.519 [2024-04-26 14:56:57.606254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.519 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.519 [2024-04-26 14:56:57.737470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:59.519 [2024-04-26 14:56:57.993645] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.519 [2024-04-26 14:56:57.993726] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.519 [2024-04-26 14:56:57.993754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.519 [2024-04-26 14:56:57.993777] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.519 [2024-04-26 14:56:57.993796] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.519 [2024-04-26 14:56:57.993938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:59.519 [2024-04-26 14:56:57.994013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:59.519 [2024-04-26 14:56:57.994099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:59.519 [2024-04-26 14:56:57.994106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:59.519 14:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:59.519 14:56:58 -- common/autotest_common.sh@850 -- # return 0
00:19:59.519 14:56:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:19:59.519 14:56:58 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 14:56:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
00:19:59.519 14:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 [2024-04-26 14:56:58.587210] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f2a928a0940) succeed.
00:19:59.519 [2024-04-26 14:56:58.598137] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f2a92859940) succeed.
00:19:59.519 14:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@22 -- # seq 0 5
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
00:19:59.519 14:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 14:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:59.519 14:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 Malloc0
00:19:59.519 14:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
00:19:59.519 14:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 14:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:19:59.519 14:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.519 14:56:58 -- common/autotest_common.sh@10 -- # set +x
00:19:59.519 [2024-04-26 14:56:58.788929] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:59.519 14:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.519 14:56:58 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1
00:20:02.806 14:57:02 -- common/autotest_common.sh@1221 -- # local i=0
00:20:02.806 14:57:02 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:02.806 14:57:02 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1
00:20:02.806 14:57:02 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:02.806 14:57:02 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1
00:20:02.806 14:57:02 -- common/autotest_common.sh@1232 -- # return 0
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:02.806 14:57:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.806 14:57:02 -- common/autotest_common.sh@10 -- # set +x
00:20:02.806 14:57:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:20:02.806 14:57:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.806 14:57:02 -- common/autotest_common.sh@10 -- # set +x
00:20:02.806 Malloc1
00:20:02.806 14:57:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:20:02.806 14:57:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.806 14:57:02 -- common/autotest_common.sh@10 -- # set +x
00:20:02.806 14:57:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:02.806 14:57:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.806 14:57:02 -- common/autotest_common.sh@10 -- # set +x
00:20:02.806 14:57:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.806 14:57:02 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1
00:20:06.105 14:57:06 -- common/autotest_common.sh@1221 -- # local i=0
00:20:06.105 14:57:06 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:06.105 14:57:06 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1
00:20:06.105 14:57:06 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:06.105 14:57:06 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1
00:20:06.105 14:57:06 -- common/autotest_common.sh@1232 -- # return 0
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:20:06.105 14:57:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:06.105 14:57:06 -- common/autotest_common.sh@10 -- # set +x
00:20:06.105 14:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:20:06.105 14:57:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:06.105 14:57:06 -- common/autotest_common.sh@10 -- # set +x
00:20:06.105 Malloc2
00:20:06.105 14:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:20:06.105 14:57:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:06.105 14:57:06 -- common/autotest_common.sh@10 -- # set +x
00:20:06.105 14:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:20:06.105 14:57:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:06.105 14:57:06 -- common/autotest_common.sh@10 -- # set +x
00:20:06.105 14:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:06.105 14:57:06 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1
00:20:10.299 14:57:09 -- common/autotest_common.sh@1221 -- # local i=0
00:20:10.299 14:57:09 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:10.299 14:57:09 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1
00:20:10.299 14:57:09 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:10.299 14:57:09 -- common/autotest_common.sh@1228 -- # grep -q -w nvme2n1
00:20:10.299 14:57:09 -- common/autotest_common.sh@1232 -- # return 0
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:20:10.299 14:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:10.299 14:57:09 -- common/autotest_common.sh@10 -- # set +x
00:20:10.299 14:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:20:10.299 14:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:10.299 14:57:09 -- common/autotest_common.sh@10 -- # set +x
00:20:10.299 Malloc3
00:20:10.299 14:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:20:10.299 14:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:10.299 14:57:09 -- common/autotest_common.sh@10 -- # set +x
00:20:10.299 14:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:20:10.299 14:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:10.299 14:57:09 -- common/autotest_common.sh@10 -- # set +x
00:20:10.299 14:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:10.299 14:57:09 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1
00:20:13.585 14:57:13 -- common/autotest_common.sh@1221 -- # local i=0
00:20:13.585 14:57:13 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:13.585 14:57:13 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1
00:20:13.585 14:57:13 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:13.585 14:57:13 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1
00:20:13.585 14:57:13 -- common/autotest_common.sh@1232 -- # return 0
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:20:13.585 14:57:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:13.585 14:57:13 -- common/autotest_common.sh@10 -- # set +x
00:20:13.585 14:57:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:20:13.585 14:57:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:13.585 14:57:13 -- common/autotest_common.sh@10 -- # set +x
00:20:13.585 Malloc4
00:20:13.585 14:57:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:20:13.585 14:57:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:13.585 14:57:13 -- common/autotest_common.sh@10 -- # set +x
00:20:13.585 14:57:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:20:13.585 14:57:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:13.585 14:57:13 -- common/autotest_common.sh@10 -- # set +x
00:20:13.585 14:57:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:13.585 14:57:13 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420
00:20:16.872 14:57:16 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1
00:20:16.872 14:57:16 -- common/autotest_common.sh@1221 -- # local i=0
00:20:16.872 14:57:16 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:16.872 14:57:16 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1
00:20:16.872 14:57:16 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:16.872 14:57:16 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1
00:20:16.872 14:57:16 -- common/autotest_common.sh@1232 -- # return 0
00:20:16.872 14:57:16 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:20:16.872 14:57:16 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005
00:20:16.872 14:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:16.872 14:57:16 -- common/autotest_common.sh@10 -- # set +x
00:20:16.872 14:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:16.872 14:57:16 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:20:16.872 14:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:16.872 14:57:16 -- common/autotest_common.sh@10 -- # set +x
00:20:16.872 Malloc5
00:20:16.872 14:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:16.873 14:57:16 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:20:16.873 14:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:16.873 14:57:16 -- common/autotest_common.sh@10 -- # set +x
00:20:16.873 14:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:16.873 14:57:16 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:20:16.873 14:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:16.873 14:57:16 -- common/autotest_common.sh@10 -- # set +x
00:20:16.873 14:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:16.873 14:57:16 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:20:21.059 14:57:20 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:20:21.059 14:57:20 -- common/autotest_common.sh@1221 -- # local i=0
00:20:21.059 14:57:20 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME
00:20:21.059 14:57:20 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1
00:20:21.059 14:57:20 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME
00:20:21.059 14:57:20 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1
00:20:21.059 14:57:20 -- common/autotest_common.sh@1232 -- # return 0
00:20:21.059 14:57:20 -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:20:21.059 [global]
00:20:21.059 thread=1
00:20:21.059 invalidate=1
00:20:21.059 rw=read
00:20:21.059 time_based=1
00:20:21.059 runtime=10
00:20:21.059 ioengine=libaio
00:20:21.059 direct=1
00:20:21.059 bs=1048576
00:20:21.059 iodepth=128
00:20:21.059 norandommap=1
00:20:21.059 numjobs=13
00:20:21.059
00:20:21.059 [job0]
00:20:21.059 filename=/dev/nvme0n1
00:20:21.059 [job1]
00:20:21.059 filename=/dev/nvme1n1
00:20:21.059 [job2]
00:20:21.059 filename=/dev/nvme2n1
00:20:21.059 [job3]
00:20:21.059 filename=/dev/nvme3n1
00:20:21.059 [job4]
00:20:21.059 filename=/dev/nvme4n1
00:20:21.059 [job5]
00:20:21.059 filename=/dev/nvme5n1
00:20:21.059 Could not set queue depth (nvme0n1)
00:20:21.059 Could not set queue depth (nvme1n1)
00:20:21.059 Could not set queue depth (nvme2n1)
00:20:21.059 Could not set queue depth (nvme3n1)
00:20:21.059 Could not set queue depth (nvme4n1)
00:20:21.059 Could not set queue depth (nvme5n1)
00:20:21.059 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:20:21.059 ...
00:20:21.059 fio-3.35
00:20:21.059 Starting 78 threads
00:20:35.936
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257473: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=3, BW=3188KiB/s (3265kB/s)(38.0MiB/12205msec)
00:20:35.936 slat (usec): min=502, max=6381.0k, avg=319993.45, stdev=1126064.72
00:20:35.936 clat (msec): min=44, max=12203, avg=10802.11, stdev=2489.78
00:20:35.936 lat (msec): min=6425, max=12204, avg=11122.11, stdev=1737.58
00:20:35.936 clat percentiles (msec):
00:20:35.936 | 1.00th=[ 45], 5.00th=[ 6409], 10.00th=[ 8490], 20.00th=[ 8557],
00:20:35.936 | 30.00th=[10805], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147],
00:20:35.936 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:20:35.936 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:20:35.936 | 99.99th=[12147]
00:20:35.936 lat (msec) : 50=2.63%, >=2000=97.37%
00:20:35.936 cpu : usr=0.00%, sys=0.25%, ctx=62, majf=0, minf=9729
00:20:35.936 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0%
00:20:35.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.936 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.936 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.936 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257474: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=14, BW=14.9MiB/s (15.6MB/s)(181MiB/12171msec)
00:20:35.936 slat (usec): min=63, max=6443.1k, avg=55473.77, stdev=503017.23
00:20:35.936 clat (msec): min=718, max=11992, avg=8117.37, stdev=4504.25
00:20:35.936 lat (msec): min=719, max=12006, avg=8172.85, stdev=4486.19
00:20:35.936 clat percentiles (msec):
00:20:35.936 | 1.00th=[ 718], 5.00th=[ 726], 10.00th=[ 818], 20.00th=[ 1083],
00:20:35.936 | 30.00th=[ 8557], 40.00th=[10939], 50.00th=[11073], 60.00th=[11073],
00:20:35.936 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11476],
00:20:35.936 | 99.00th=[11879], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:20:35.936 | 99.99th=[12013]
00:20:35.936 bw ( KiB/s): min= 1965, max=86016, per=1.80%, avg=36836.33, stdev=43814.26, samples=3
00:20:35.936 iops : min= 1, max= 84, avg=35.67, stdev=43.15, samples=3
00:20:35.936 lat (msec) : 750=8.29%, 1000=6.63%, 2000=8.29%, >=2000=76.80%
00:20:35.936 cpu : usr=0.02%, sys=0.62%, ctx=141, majf=0, minf=32769
00:20:35.936 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.8%, 32=17.7%, >=64=65.2%
00:20:35.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.936 complete : 0=0.0%, 4=98.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.8%
00:20:35.936 issued rwts: total=181,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.936 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257475: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=41, BW=41.3MiB/s (43.3MB/s)(500MiB/12097msec)
00:20:35.936 slat (usec): min=46, max=2001.6k, avg=19992.65, stdev=161128.19
00:20:35.936 clat (msec): min=428, max=6157, avg=2224.98, stdev=1941.01
00:20:35.936 lat (msec): min=432, max=6160, avg=2244.97, stdev=1948.38
00:20:35.936 clat percentiles (msec):
00:20:35.936 | 1.00th=[ 435], 5.00th=[ 485], 10.00th=[ 567], 20.00th=[ 609],
00:20:35.936 | 30.00th=[ 718], 40.00th=[ 802], 50.00th=[ 869], 60.00th=[ 2601],
00:20:35.936 | 70.00th=[ 4329], 80.00th=[ 4530], 90.00th=[ 4732], 95.00th=[ 5671],
00:20:35.936 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141],
00:20:35.936 | 99.99th=[ 6141]
00:20:35.936 bw ( KiB/s): min=12288, max=231424, per=6.22%, avg=127317.33, stdev=82261.47, samples=6
00:20:35.936 iops : min= 12, max= 226, avg=124.33, stdev=80.33, samples=6
00:20:35.936 lat (msec) : 500=5.60%, 750=29.80%, 1000=23.20%, >=2000=41.40%
00:20:35.936 cpu : usr=0.05%, sys=1.07%, ctx=285, majf=0, minf=32769
00:20:35.936 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4%
00:20:35.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.936 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:20:35.936 issued rwts: total=500,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.936 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257476: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=2, BW=2897KiB/s (2966kB/s)(40.0MiB/14140msec)
00:20:35.936 slat (usec): min=451, max=4285.1k, avg=250560.90, stdev=953496.84
00:20:35.936 clat (msec): min=4117, max=14137, avg=13051.75, stdev=2438.08
00:20:35.936 lat (msec): min=4141, max=14139, avg=13302.31, stdev=1965.54
00:20:35.936 clat percentiles (msec):
00:20:35.936 | 1.00th=[ 4111], 5.00th=[ 4144], 10.00th=[ 8423], 20.00th=[12818],
00:20:35.936 | 30.00th=[13892], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026],
00:20:35.936 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160],
00:20:35.936 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:20:35.936 | 99.99th=[14160]
00:20:35.936 lat (msec) : >=2000=100.00%
00:20:35.936 cpu : usr=0.00%, sys=0.15%, ctx=45, majf=0, minf=10241
00:20:35.936 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:20:35.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.936 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.936 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.936 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257477: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=4, BW=4555KiB/s (4664kB/s)(63.0MiB/14164msec)
00:20:35.936 slat (usec): min=431, max=4273.9k, avg=159183.81, stdev=757828.89
00:20:35.936 clat (msec): min=4134, max=14160, avg=13172.53, stdev=1483.42
00:20:35.936 lat (msec): min=8408, max=14163, avg=13331.72, stdev=934.42
00:20:35.936 clat percentiles (msec):
00:20:35.936 | 1.00th=[ 4144], 5.00th=[12550], 10.00th=[12550], 20.00th=[12684],
00:20:35.936 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[14026],
00:20:35.936 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160],
00:20:35.936 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:20:35.936 | 99.99th=[14160]
00:20:35.936 lat (msec) : >=2000=100.00%
00:20:35.936 cpu : usr=0.00%, sys=0.28%, ctx=59, majf=0, minf=16129
00:20:35.936 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0%
00:20:35.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.936 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.936 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.936 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.936 job0: (groupid=0, jobs=1): err= 0: pid=257478: Fri Apr 26 14:57:35 2024
00:20:35.936 read: IOPS=7, BW=7547KiB/s (7728kB/s)(89.0MiB/12076msec)
00:20:35.936 slat (usec): min=417, max=2070.6k, avg=135107.92, stdev=478202.55
00:20:35.936 clat (msec): min=49, max=12072, avg=7304.38, stdev=3467.69
00:20:35.936 lat (msec): min=2088, max=12074, avg=7439.49, stdev=3415.69
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 51], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4144],
00:20:35.937 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8423], 60.00th=[ 8557],
00:20:35.937 | 70.00th=[ 8658], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013],
00:20:35.937 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:20:35.937 | 99.99th=[12013]
00:20:35.937 lat (msec) : 50=1.12%, >=2000=98.88%
00:20:35.937 cpu : usr=0.00%, sys=0.37%, ctx=86, majf=0, minf=22785
00:20:35.937 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:20:35.937 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257479: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=3, BW=3377KiB/s (3458kB/s)(40.0MiB/12128msec)
00:20:35.937 slat (usec): min=449, max=4258.6k, avg=250224.70, stdev=822904.61
00:20:35.937 clat (msec): min=2118, max=12126, avg=9811.61, stdev=3263.07
00:20:35.937 lat (msec): min=2136, max=12127, avg=10061.84, stdev=3033.69
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 2140], 20.00th=[ 8557],
00:20:35.937 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805],
00:20:35.937 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:20:35.937 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:20:35.937 | 99.99th=[12147]
00:20:35.937 lat (msec) : >=2000=100.00%
00:20:35.937 cpu : usr=0.01%, sys=0.16%, ctx=50, majf=0, minf=10241
00:20:35.937 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.937 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257480: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=2, BW=2462KiB/s (2521kB/s)(34.0MiB/14142msec)
00:20:35.937 slat (usec): min=606, max=4265.4k, avg=294928.37, stdev=1010675.68
00:20:35.937 clat (msec): min=4113, max=14138, avg=12690.44, stdev=2407.15
00:20:35.937 lat (msec): min=4147, max=14141, avg=12985.37, stdev=1881.33
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 4111], 5.00th=[ 4144], 10.00th=[12550], 20.00th=[12550],
00:20:35.937 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[13892],
00:20:35.937 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14160],
00:20:35.937 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:20:35.937 | 99.99th=[14160]
00:20:35.937 lat (msec) : >=2000=100.00%
00:20:35.937 cpu : usr=0.00%, sys=0.19%, ctx=58, majf=0, minf=8705
00:20:35.937 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.937 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257481: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=3, BW=3748KiB/s (3838kB/s)(52.0MiB/14207msec)
00:20:35.937 slat (usec): min=537, max=4284.5k, avg=193881.27, stdev=832236.34
00:20:35.937 clat (msec): min=4124, max=14205, avg=13318.18, stdev=1782.26
00:20:35.937 lat (msec): min=8408, max=14206, avg=13512.06, stdev=1223.17
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 4111], 5.00th=[ 8423], 10.00th=[12550], 20.00th=[12818],
00:20:35.937 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14160], 60.00th=[14160],
00:20:35.937 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160],
00:20:35.937 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:20:35.937 | 99.99th=[14160]
00:20:35.937 lat (msec) : >=2000=100.00%
00:20:35.937 cpu : usr=0.00%, sys=0.24%, ctx=69, majf=0, minf=13313
00:20:35.937 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.937 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257482: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=19, BW=19.4MiB/s (20.3MB/s)(235MiB/12126msec)
00:20:35.937 slat (usec): min=49, max=4273.3k, avg=51365.49, stdev=364360.37
00:20:35.937 clat (msec): min=53, max=11371, avg=6300.64, stdev=5082.21
00:20:35.937 lat (msec): min=540, max=11372, avg=6352.01, stdev=5072.93
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 542], 5.00th=[ 609], 10.00th=[ 676], 20.00th=[ 709],
00:20:35.937 | 30.00th=[ 743], 40.00th=[ 852], 50.00th=[10939], 60.00th=[10939],
00:20:35.937 | 70.00th=[11073], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342],
00:20:35.937 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342],
00:20:35.937 | 99.99th=[11342]
00:20:35.937 bw ( KiB/s): min= 2048, max=161792, per=2.14%, avg=43827.20, stdev=68303.18, samples=5
00:20:35.937 iops : min= 2, max= 158, avg=42.80, stdev=66.70, samples=5
00:20:35.937 lat (msec) : 100=0.43%, 750=31.91%, 1000=11.06%, >=2000=56.60%
00:20:35.937 cpu : usr=0.06%, sys=0.78%, ctx=146, majf=0, minf=32769
00:20:35.937 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:20:35.937 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257483: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=16, BW=16.6MiB/s (17.5MB/s)(235MiB/14121msec)
00:20:35.937 slat (usec): min=53, max=3729.4k, avg=51521.35, stdev=320545.98
00:20:35.937 clat (msec): min=1024, max=13977, avg=5796.00, stdev=1683.71
00:20:35.937 lat (msec): min=1091, max=13978, avg=5847.52, stdev=1661.04
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 1099], 5.00th=[ 2903], 10.00th=[ 3071], 20.00th=[ 5336],
00:20:35.937 | 30.00th=[ 5537], 40.00th=[ 5738], 50.00th=[ 6007], 60.00th=[ 6141],
00:20:35.937 | 70.00th=[ 6611], 80.00th=[ 6678], 90.00th=[ 6745], 95.00th=[ 7886],
00:20:35.937 | 99.00th=[12818], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:20:35.937 | 99.99th=[14026]
00:20:35.937 bw ( KiB/s): min= 2048, max=102400, per=2.16%, avg=44237.60, stdev=53259.02, samples=5
00:20:35.937 iops : min= 2, max= 100, avg=43.20, stdev=52.01, samples=5
00:20:35.937 lat (msec) : 2000=2.98%, >=2000=97.02%
00:20:35.937 cpu : usr=0.00%, sys=0.72%, ctx=163, majf=0, minf=32769
00:20:35.937 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:20:35.937 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257484: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=0, BW=761KiB/s (779kB/s)(9216KiB/12117msec)
00:20:35.937 slat (msec): min=6, max=6507, avg=1342.69, stdev=2129.01
00:20:35.937 clat (msec): min=32, max=11950, avg=8261.03, stdev=4238.91
00:20:35.937 lat (msec): min=2129, max=12116, avg=9603.72, stdev=3055.11
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 2123],
00:20:35.937 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[10671],
00:20:35.937 | 70.00th=[10805], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013],
00:20:35.937 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:20:35.937 | 99.99th=[12013]
00:20:35.937 lat (msec) : 50=11.11%, >=2000=88.89%
00:20:35.937 cpu : usr=0.00%, sys=0.05%, ctx=23, majf=0, minf=2305
00:20:35.937 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 issued rwts: total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job0: (groupid=0, jobs=1): err= 0: pid=257485: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=1, BW=1521KiB/s (1558kB/s)(21.0MiB/14136msec)
00:20:35.937 slat (usec): min=533, max=4294.3k, avg=576958.87, stdev=1337645.95
00:20:35.937 clat (msec): min=2019, max=14134, avg=12324.58, stdev=3346.37
00:20:35.937 lat (msec): min=4142, max=14135, avg=12901.54, stdev=2387.98
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 2022], 5.00th=[ 4144], 10.00th=[ 8423], 20.00th=[12684],
00:20:35.937 | 30.00th=[12818], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026],
00:20:35.937 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160],
00:20:35.937 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:20:35.937 | 99.99th=[14160]
00:20:35.937 lat (msec) : >=2000=100.00%
00:20:35.937 cpu : usr=0.00%, sys=0.10%, ctx=41, majf=0, minf=5377
00:20:35.937 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0%
00:20:35.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.937 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:20:35.937 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.937 job1: (groupid=0, jobs=1): err= 0: pid=257486: Fri Apr 26 14:57:35 2024
00:20:35.937 read: IOPS=23, BW=23.3MiB/s (24.4MB/s)(283MiB/12170msec)
00:20:35.937 slat (usec): min=57, max=2214.2k, avg=35660.50, stdev=226256.23
00:20:35.937 clat (msec): min=947, max=7476, avg=3878.17, stdev=2721.74
00:20:35.937 lat (msec): min=1012, max=7485, avg=3913.83, stdev=2718.89
00:20:35.937 clat percentiles (msec):
00:20:35.937 | 1.00th=[ 1028], 5.00th=[ 1070], 10.00th=[ 1099], 20.00th=[ 1116],
00:20:35.937 | 30.00th=[ 1150], 40.00th=[ 1183], 50.00th=[ 3406], 60.00th=[ 6477],
00:20:35.937 | 70.00th=[ 6745], 80.00th=[ 7013], 90.00th=[ 7148], 95.00th=[ 7282],
00:20:35.937 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483],
00:20:35.937 | 99.99th=[ 7483]
00:20:35.937 bw ( KiB/s): min= 1928, max=147456, per=2.23%, avg=45624.00, stdev=52679.97, samples=7
00:20:35.937 iops : min= 1, max= 144, avg=44.43, stdev=51.57, samples=7
00:20:35.938 lat (msec) : 1000=0.35%, 2000=43.82%, >=2000=55.83%
00:20:35.938 cpu : usr=0.01%, sys=0.84%, ctx=267, majf=0, minf=32769
00:20:35.938 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7%
00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.938 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:20:35.938 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257487: Fri Apr 26 14:57:35 2024
00:20:35.938 read: IOPS=73, BW=73.0MiB/s (76.6MB/s)(888MiB/12162msec)
00:20:35.938 slat (usec): min=51, max=2165.4k, avg=11263.67, stdev=129560.57
00:20:35.938 clat (msec): min=175, max=6890, avg=1324.31, stdev=2261.32
00:20:35.938 lat (msec): min=177, max=6891, avg=1335.58, stdev=2269.51
00:20:35.938 clat percentiles (msec):
00:20:35.938 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 180], 20.00th=[ 207],
00:20:35.938 | 30.00th=[ 211], 40.00th=[ 300], 50.00th=[ 351], 60.00th=[ 355],
00:20:35.938 | 70.00th=[ 359], 80.00th=[ 430], 90.00th=[ 6678], 95.00th=[ 6745],
00:20:35.938 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879],
00:20:35.938 | 99.99th=[ 6879]
00:20:35.938 bw ( KiB/s): min= 1996, max=636928, per=9.52%, avg=194809.50, stdev=234863.54, samples=8
00:20:35.938 iops : min= 1, max= 622, avg=190.12, stdev=229.47, samples=8
00:20:35.938 lat (msec) : 250=34.91%, 500=45.83%, >=2000=19.26%
00:20:35.938 cpu : usr=0.04%, sys=1.40%, ctx=743, majf=0, minf=32769
00:20:35.938 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9%
00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:35.938 issued rwts: total=888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257488: Fri Apr 26 14:57:35 2024
00:20:35.938 read: IOPS=2, BW=2947KiB/s (3018kB/s)(35.0MiB/12161msec)
00:20:35.938 slat (usec): min=537, max=2238.0k, avg=285817.59, stdev=702517.41
00:20:35.938 clat (msec): min=2156, max=12158, avg=10400.95, stdev=2714.93
00:20:35.938 lat (msec): min=4284, max=12160, avg=10686.76, stdev=2319.22
00:20:35.938 clat percentiles (msec):
00:20:35.938 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 6342], 20.00th=[ 6342],
00:20:35.938 | 30.00th=[10805], 40.00th=[10805], 50.00th=[12013], 60.00th=[12147],
00:20:35.938 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147],
00:20:35.938 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:20:35.938 | 99.99th=[12147]
00:20:35.938 lat (msec) : >=2000=100.00%
00:20:35.938 cpu : usr=0.00%, sys=0.19%, ctx=57, majf=0, minf=8961
00:20:35.938 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0%
00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:35.938 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:20:35.938 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.938 job1: (groupid=0,
jobs=1): err= 0: pid=257489: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=61, BW=61.2MiB/s (64.2MB/s)(744MiB/12153msec) 00:20:35.938 slat (usec): min=43, max=2078.6k, avg=13564.37, stdev=108101.74 00:20:35.938 clat (msec): min=321, max=6449, avg=1984.08, stdev=1737.22 00:20:35.938 lat (msec): min=321, max=8527, avg=1997.65, stdev=1749.23 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 342], 5.00th=[ 384], 10.00th=[ 409], 20.00th=[ 659], 00:20:35.938 | 30.00th=[ 835], 40.00th=[ 885], 50.00th=[ 969], 60.00th=[ 1854], 00:20:35.938 | 70.00th=[ 2702], 80.00th=[ 3608], 90.00th=[ 5201], 95.00th=[ 5470], 00:20:35.938 | 99.00th=[ 5738], 99.50th=[ 5805], 99.90th=[ 6477], 99.95th=[ 6477], 00:20:35.938 | 99.99th=[ 6477] 00:20:35.938 bw ( KiB/s): min= 2019, max=165888, per=4.41%, avg=90256.21, stdev=56579.76, samples=14 00:20:35.938 iops : min= 1, max= 162, avg=88.07, stdev=55.37, samples=14 00:20:35.938 lat (msec) : 500=18.68%, 750=3.76%, 1000=28.90%, 2000=12.37%, >=2000=36.29% 00:20:35.938 cpu : usr=0.02%, sys=1.09%, ctx=655, majf=0, minf=32769 00:20:35.938 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.938 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.938 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257490: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=3, BW=3963KiB/s (4058kB/s)(47.0MiB/12144msec) 00:20:35.938 slat (usec): min=441, max=2120.2k, avg=212774.25, stdev=588198.57 00:20:35.938 clat (msec): min=2142, max=12036, avg=9607.41, stdev=2949.38 00:20:35.938 lat (msec): min=2156, max=12143, avg=9820.19, stdev=2753.37 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 8490], 00:20:35.938 | 
30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[10805], 00:20:35.938 | 70.00th=[10805], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:20:35.938 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:20:35.938 | 99.99th=[12013] 00:20:35.938 lat (msec) : >=2000=100.00% 00:20:35.938 cpu : usr=0.00%, sys=0.26%, ctx=85, majf=0, minf=12033 00:20:35.938 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.938 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:35.938 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257491: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(262MiB/10156msec) 00:20:35.938 slat (usec): min=44, max=2144.9k, avg=38368.87, stdev=217066.92 00:20:35.938 clat (msec): min=101, max=8490, avg=2407.21, stdev=2281.90 00:20:35.938 lat (msec): min=242, max=8491, avg=2445.58, stdev=2307.68 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 243], 5.00th=[ 243], 10.00th=[ 1028], 20.00th=[ 1116], 00:20:35.938 | 30.00th=[ 1234], 40.00th=[ 1267], 50.00th=[ 1536], 60.00th=[ 1687], 00:20:35.938 | 70.00th=[ 2769], 80.00th=[ 2769], 90.00th=[ 8020], 95.00th=[ 8221], 00:20:35.938 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:20:35.938 | 99.99th=[ 8490] 00:20:35.938 bw ( KiB/s): min=32768, max=114688, per=3.35%, avg=68608.00, stdev=34976.26, samples=4 00:20:35.938 iops : min= 32, max= 112, avg=67.00, stdev=34.16, samples=4 00:20:35.938 lat (msec) : 250=6.49%, 1000=2.29%, 2000=60.69%, >=2000=30.53% 00:20:35.938 cpu : usr=0.03%, sys=0.95%, ctx=245, majf=0, minf=32769 00:20:35.938 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.2%, >=64=76.0% 00:20:35.938 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.938 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:20:35.938 issued rwts: total=262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257492: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=29, BW=29.3MiB/s (30.8MB/s)(357MiB/12171msec) 00:20:35.938 slat (usec): min=47, max=2118.0k, avg=28043.47, stdev=192024.18 00:20:35.938 clat (msec): min=727, max=10551, avg=4150.05, stdev=3063.71 00:20:35.938 lat (msec): min=730, max=10551, avg=4178.10, stdev=3077.53 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 726], 5.00th=[ 743], 10.00th=[ 760], 20.00th=[ 827], 00:20:35.938 | 30.00th=[ 936], 40.00th=[ 3071], 50.00th=[ 3306], 60.00th=[ 4178], 00:20:35.938 | 70.00th=[ 6477], 80.00th=[ 7684], 90.00th=[ 8221], 95.00th=[ 8490], 00:20:35.938 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:35.938 | 99.99th=[10537] 00:20:35.938 bw ( KiB/s): min= 1946, max=145408, per=2.56%, avg=52326.44, stdev=44826.59, samples=9 00:20:35.938 iops : min= 1, max= 142, avg=51.00, stdev=43.90, samples=9 00:20:35.938 lat (msec) : 750=6.44%, 1000=26.33%, >=2000=67.23% 00:20:35.938 cpu : usr=0.01%, sys=0.81%, ctx=307, majf=0, minf=32769 00:20:35.938 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.4% 00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.938 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.938 issued rwts: total=357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257493: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=35, BW=35.0MiB/s (36.7MB/s)(425MiB/12129msec) 00:20:35.938 slat (usec): min=46, max=1970.8k, avg=23565.82, stdev=158521.61 00:20:35.938 
clat (msec): min=850, max=8718, avg=3287.12, stdev=2361.16 00:20:35.938 lat (msec): min=853, max=8718, avg=3310.68, stdev=2370.82 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 902], 5.00th=[ 919], 10.00th=[ 944], 20.00th=[ 1003], 00:20:35.938 | 30.00th=[ 1062], 40.00th=[ 2089], 50.00th=[ 2802], 60.00th=[ 3876], 00:20:35.938 | 70.00th=[ 4279], 80.00th=[ 5067], 90.00th=[ 6409], 95.00th=[ 8490], 00:20:35.938 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:20:35.938 | 99.99th=[ 8658] 00:20:35.938 bw ( KiB/s): min= 1878, max=143360, per=2.98%, avg=61013.40, stdev=53048.02, samples=10 00:20:35.938 iops : min= 1, max= 140, avg=59.50, stdev=51.91, samples=10 00:20:35.938 lat (msec) : 1000=19.06%, 2000=17.18%, >=2000=63.76% 00:20:35.938 cpu : usr=0.02%, sys=1.15%, ctx=351, majf=0, minf=32769 00:20:35.938 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:20:35.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.938 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.938 issued rwts: total=425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.938 job1: (groupid=0, jobs=1): err= 0: pid=257494: Fri Apr 26 14:57:35 2024 00:20:35.938 read: IOPS=11, BW=11.4MiB/s (12.0MB/s)(138MiB/12085msec) 00:20:35.938 slat (usec): min=443, max=2153.4k, avg=72520.45, stdev=334983.47 00:20:35.938 clat (msec): min=2075, max=11862, avg=9607.84, stdev=2495.55 00:20:35.938 lat (msec): min=2091, max=11866, avg=9680.36, stdev=2417.27 00:20:35.938 clat percentiles (msec): 00:20:35.938 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 6342], 20.00th=[ 8221], 00:20:35.938 | 30.00th=[ 8423], 40.00th=[ 9866], 50.00th=[10939], 60.00th=[11073], 00:20:35.938 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:20:35.938 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:20:35.938 | 
99.99th=[11879] 00:20:35.939 bw ( KiB/s): min=10240, max=12288, per=0.55%, avg=11264.00, stdev=1448.15, samples=2 00:20:35.939 iops : min= 10, max= 12, avg=11.00, stdev= 1.41, samples=2 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.939 cpu : usr=0.02%, sys=0.65%, ctx=258, majf=0, minf=32769 00:20:35.939 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.6%, 32=23.2%, >=64=54.3% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=91.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=8.3% 00:20:35.939 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job1: (groupid=0, jobs=1): err= 0: pid=257495: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(152MiB/12098msec) 00:20:35.939 slat (usec): min=73, max=2162.0k, avg=65899.16, stdev=326587.46 00:20:35.939 clat (msec): min=2080, max=11851, avg=9448.80, stdev=2831.20 00:20:35.939 lat (msec): min=2104, max=11859, avg=9514.70, stdev=2771.04 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 2106], 5.00th=[ 3540], 10.00th=[ 4245], 20.00th=[ 6409], 00:20:35.939 | 30.00th=[ 8557], 40.00th=[10805], 50.00th=[10939], 60.00th=[11208], 00:20:35.939 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:20:35.939 | 99.00th=[11745], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:20:35.939 | 99.99th=[11879] 00:20:35.939 bw ( KiB/s): min= 1372, max=16384, per=0.49%, avg=10104.80, stdev=5676.30, samples=5 00:20:35.939 iops : min= 1, max= 16, avg= 9.80, stdev= 5.67, samples=5 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.939 cpu : usr=0.01%, sys=0.65%, ctx=240, majf=0, minf=32769 00:20:35.939 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.5%, 32=21.1%, >=64=58.6% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=96.2%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=3.8% 00:20:35.939 issued rwts: total=152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job1: (groupid=0, jobs=1): err= 0: pid=257496: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=5, BW=5473KiB/s (5605kB/s)(65.0MiB/12161msec) 00:20:35.939 slat (usec): min=490, max=2219.7k, avg=153970.37, stdev=516400.74 00:20:35.939 clat (msec): min=2151, max=12158, avg=8856.64, stdev=2944.86 00:20:35.939 lat (msec): min=4127, max=12160, avg=9010.61, stdev=2848.91 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 2165], 5.00th=[ 4144], 10.00th=[ 4279], 20.00th=[ 4329], 00:20:35.939 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8423], 60.00th=[ 8423], 00:20:35.939 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.939 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.939 | 99.99th=[12147] 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.939 cpu : usr=0.00%, sys=0.40%, ctx=104, majf=0, minf=16641 00:20:35.939 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.939 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job1: (groupid=0, jobs=1): err= 0: pid=257497: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=100, BW=100MiB/s (105MB/s)(1019MiB/10164msec) 00:20:35.939 slat (usec): min=42, max=1132.5k, avg=9839.35, stdev=43000.44 00:20:35.939 clat (msec): min=128, max=2388, avg=1024.99, stdev=497.26 00:20:35.939 lat (msec): min=243, max=2513, avg=1034.83, stdev=501.29 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 245], 5.00th=[ 409], 10.00th=[ 464], 20.00th=[ 535], 00:20:35.939 | 30.00th=[ 701], 40.00th=[ 835], 
50.00th=[ 927], 60.00th=[ 1020], 00:20:35.939 | 70.00th=[ 1234], 80.00th=[ 1569], 90.00th=[ 1703], 95.00th=[ 2022], 00:20:35.939 | 99.00th=[ 2165], 99.50th=[ 2333], 99.90th=[ 2333], 99.95th=[ 2400], 00:20:35.939 | 99.99th=[ 2400] 00:20:35.939 bw ( KiB/s): min=43008, max=288768, per=5.94%, avg=121632.40, stdev=73487.40, samples=15 00:20:35.939 iops : min= 42, max= 282, avg=118.60, stdev=71.67, samples=15 00:20:35.939 lat (msec) : 250=1.57%, 500=15.51%, 750=15.60%, 1000=25.91%, 2000=35.92% 00:20:35.939 lat (msec) : >=2000=5.50% 00:20:35.939 cpu : usr=0.09%, sys=1.73%, ctx=816, majf=0, minf=32769 00:20:35.939 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.939 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job1: (groupid=0, jobs=1): err= 0: pid=257498: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=4, BW=4297KiB/s (4400kB/s)(51.0MiB/12153msec) 00:20:35.939 slat (usec): min=477, max=4259.4k, avg=196160.07, stdev=716983.03 00:20:35.939 clat (msec): min=2148, max=12151, avg=8223.79, stdev=3932.40 00:20:35.939 lat (msec): min=2157, max=12152, avg=8419.95, stdev=3872.33 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4077], 20.00th=[ 4144], 00:20:35.939 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[10671], 60.00th=[11879], 00:20:35.939 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.939 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.939 | 99.99th=[12147] 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.939 cpu : usr=0.00%, sys=0.30%, ctx=87, majf=0, minf=13057 00:20:35.939 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:20:35.939 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:35.939 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job2: (groupid=0, jobs=1): err= 0: pid=257499: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=33, BW=33.4MiB/s (35.1MB/s)(406MiB/12145msec) 00:20:35.939 slat (usec): min=44, max=3021.3k, avg=24708.24, stdev=212503.64 00:20:35.939 clat (msec): min=339, max=8398, avg=2741.85, stdev=2846.97 00:20:35.939 lat (msec): min=341, max=8421, avg=2766.56, stdev=2858.02 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 342], 5.00th=[ 342], 10.00th=[ 342], 20.00th=[ 347], 00:20:35.939 | 30.00th=[ 372], 40.00th=[ 481], 50.00th=[ 2836], 60.00th=[ 3138], 00:20:35.939 | 70.00th=[ 3239], 80.00th=[ 3339], 90.00th=[ 8154], 95.00th=[ 8356], 00:20:35.939 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:20:35.939 | 99.99th=[ 8423] 00:20:35.939 bw ( KiB/s): min= 1838, max=364544, per=6.98%, avg=142795.50, stdev=155328.15, samples=4 00:20:35.939 iops : min= 1, max= 356, avg=139.25, stdev=151.93, samples=4 00:20:35.939 lat (msec) : 500=41.63%, 750=6.40%, 1000=0.25%, 2000=0.25%, >=2000=51.48% 00:20:35.939 cpu : usr=0.02%, sys=0.82%, ctx=407, majf=0, minf=32769 00:20:35.939 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.9%, >=64=84.5% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.939 issued rwts: total=406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job2: (groupid=0, jobs=1): err= 0: pid=257500: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=2, BW=2198KiB/s (2251kB/s)(26.0MiB/12114msec) 00:20:35.939 slat (usec): min=472, 
max=2120.6k, avg=385143.49, stdev=783928.88 00:20:35.939 clat (msec): min=2099, max=11956, avg=8092.95, stdev=3559.97 00:20:35.939 lat (msec): min=2114, max=12113, avg=8478.10, stdev=3424.74 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4279], 00:20:35.939 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:20:35.939 | 70.00th=[10671], 80.00th=[10671], 90.00th=[11879], 95.00th=[11879], 00:20:35.939 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:20:35.939 | 99.99th=[12013] 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.939 cpu : usr=0.01%, sys=0.13%, ctx=51, majf=0, minf=6657 00:20:35.939 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.939 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job2: (groupid=0, jobs=1): err= 0: pid=257501: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(249MiB/12120msec) 00:20:35.939 slat (usec): min=48, max=2155.7k, avg=48340.66, stdev=292233.87 00:20:35.939 clat (msec): min=81, max=11271, avg=5891.59, stdev=4858.24 00:20:35.939 lat (msec): min=458, max=11273, avg=5939.93, stdev=4852.56 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 460], 5.00th=[ 472], 10.00th=[ 485], 20.00th=[ 518], 00:20:35.939 | 30.00th=[ 592], 40.00th=[ 2735], 50.00th=[ 7013], 60.00th=[10671], 00:20:35.939 | 70.00th=[11073], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:20:35.939 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:20:35.939 | 99.99th=[11208] 00:20:35.939 bw ( KiB/s): min= 2048, max=104448, per=2.02%, avg=41301.33, stdev=45820.62, samples=6 00:20:35.939 iops : min= 2, 
max= 102, avg=40.33, stdev=44.75, samples=6 00:20:35.939 lat (msec) : 100=0.40%, 500=15.26%, 750=18.88%, 1000=3.61%, >=2000=61.85% 00:20:35.939 cpu : usr=0.02%, sys=0.70%, ctx=256, majf=0, minf=32769 00:20:35.939 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:20:35.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.939 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:20:35.939 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.939 job2: (groupid=0, jobs=1): err= 0: pid=257502: Fri Apr 26 14:57:35 2024 00:20:35.939 read: IOPS=10, BW=10.3MiB/s (10.8MB/s)(125MiB/12130msec) 00:20:35.939 slat (usec): min=468, max=2110.1k, avg=80046.45, stdev=365535.95 00:20:35.939 clat (msec): min=2123, max=12128, avg=10655.09, stdev=1996.99 00:20:35.939 lat (msec): min=2146, max=12129, avg=10735.13, stdev=1847.17 00:20:35.939 clat percentiles (msec): 00:20:35.939 | 1.00th=[ 2140], 5.00th=[ 6409], 10.00th=[ 8658], 20.00th=[10805], 00:20:35.939 | 30.00th=[10939], 40.00th=[11208], 50.00th=[11342], 60.00th=[11476], 00:20:35.939 | 70.00th=[11476], 80.00th=[11745], 90.00th=[11879], 95.00th=[12013], 00:20:35.939 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.939 | 99.99th=[12147] 00:20:35.939 lat (msec) : >=2000=100.00% 00:20:35.940 cpu : usr=0.00%, sys=0.67%, ctx=190, majf=0, minf=32001 00:20:35.940 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.940 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257503: Fri Apr 26 14:57:35 2024 00:20:35.940 
read: IOPS=10, BW=10.8MiB/s (11.3MB/s)(130MiB/12093msec) 00:20:35.940 slat (usec): min=65, max=2175.7k, avg=77697.34, stdev=346096.33 00:20:35.940 clat (msec): min=1990, max=12090, avg=7105.82, stdev=3917.74 00:20:35.940 lat (msec): min=2105, max=12091, avg=7183.52, stdev=3915.66 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 3809], 00:20:35.940 | 30.00th=[ 3910], 40.00th=[ 4010], 50.00th=[ 4279], 60.00th=[ 8557], 00:20:35.940 | 70.00th=[11610], 80.00th=[11610], 90.00th=[11879], 95.00th=[12147], 00:20:35.940 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.940 | 99.99th=[12147] 00:20:35.940 bw ( KiB/s): min= 2007, max= 4096, per=0.15%, avg=3051.50, stdev=1477.15, samples=2 00:20:35.940 iops : min= 1, max= 4, avg= 2.50, stdev= 2.12, samples=2 00:20:35.940 lat (msec) : 2000=0.77%, >=2000=99.23% 00:20:35.940 cpu : usr=0.00%, sys=0.65%, ctx=205, majf=0, minf=32769 00:20:35.940 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0% 00:20:35.940 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257504: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=6, BW=6750KiB/s (6912kB/s)(80.0MiB/12136msec) 00:20:35.940 slat (usec): min=461, max=2116.5k, avg=125049.27, stdev=471325.21 00:20:35.940 clat (msec): min=2130, max=12133, avg=9637.69, stdev=3207.97 00:20:35.940 lat (msec): min=2135, max=12135, avg=9762.74, stdev=3104.96 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:20:35.940 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12013], 60.00th=[12013], 00:20:35.940 | 70.00th=[12147], 
80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.940 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.940 | 99.99th=[12147] 00:20:35.940 lat (msec) : >=2000=100.00% 00:20:35.940 cpu : usr=0.00%, sys=0.41%, ctx=85, majf=0, minf=20481 00:20:35.940 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.940 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257505: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=2, BW=2362KiB/s (2419kB/s)(28.0MiB/12137msec) 00:20:35.940 slat (usec): min=799, max=2147.5k, avg=430003.97, stdev=821419.93 00:20:35.940 clat (msec): min=96, max=12130, avg=8132.61, stdev=4230.73 00:20:35.940 lat (msec): min=2148, max=12136, avg=8562.62, stdev=3988.64 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 96], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4279], 00:20:35.940 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[10671], 60.00th=[10805], 00:20:35.940 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.940 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.940 | 99.99th=[12147] 00:20:35.940 lat (msec) : 100=3.57%, >=2000=96.43% 00:20:35.940 cpu : usr=0.00%, sys=0.16%, ctx=61, majf=0, minf=7169 00:20:35.940 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.940 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: 
(groupid=0, jobs=1): err= 0: pid=257506: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=2, BW=2454KiB/s (2513kB/s)(29.0MiB/12100msec) 00:20:35.940 slat (usec): min=503, max=2165.9k, avg=344840.80, stdev=757881.64 00:20:35.940 clat (msec): min=2098, max=12098, avg=9341.01, stdev=3507.77 00:20:35.940 lat (msec): min=2116, max=12099, avg=9685.85, stdev=3252.63 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:20:35.940 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[11879], 60.00th=[11879], 00:20:35.940 | 70.00th=[11879], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.940 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.940 | 99.99th=[12147] 00:20:35.940 lat (msec) : >=2000=100.00% 00:20:35.940 cpu : usr=0.00%, sys=0.18%, ctx=51, majf=0, minf=7425 00:20:35.940 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.940 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257507: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(322MiB/12128msec) 00:20:35.940 slat (usec): min=52, max=2123.7k, avg=31569.26, stdev=225115.38 00:20:35.940 clat (msec): min=437, max=11091, avg=4590.20, stdev=4275.41 00:20:35.940 lat (msec): min=437, max=11094, avg=4621.77, stdev=4286.11 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 439], 5.00th=[ 460], 10.00th=[ 468], 20.00th=[ 518], 00:20:35.940 | 30.00th=[ 575], 40.00th=[ 684], 50.00th=[ 2769], 60.00th=[ 5000], 00:20:35.940 | 70.00th=[ 6946], 80.00th=[10671], 90.00th=[10939], 95.00th=[11073], 00:20:35.940 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 
99.95th=[11073], 00:20:35.940 | 99.99th=[11073] 00:20:35.940 bw ( KiB/s): min= 1946, max=165888, per=2.44%, avg=49907.25, stdev=60636.05, samples=8 00:20:35.940 iops : min= 1, max= 162, avg=48.62, stdev=59.32, samples=8 00:20:35.940 lat (msec) : 500=12.42%, 750=30.12%, 1000=1.55%, 2000=0.31%, >=2000=55.59% 00:20:35.940 cpu : usr=0.01%, sys=0.84%, ctx=265, majf=0, minf=32769 00:20:35.940 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.4% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:20:35.940 issued rwts: total=322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257508: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(174MiB/12144msec) 00:20:35.940 slat (usec): min=65, max=2182.7k, avg=57640.20, stdev=314082.47 00:20:35.940 clat (msec): min=921, max=11627, avg=8450.18, stdev=4155.75 00:20:35.940 lat (msec): min=954, max=11632, avg=8507.82, stdev=4130.07 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 953], 5.00th=[ 978], 10.00th=[ 1070], 20.00th=[ 2165], 00:20:35.940 | 30.00th=[ 6477], 40.00th=[10805], 50.00th=[10939], 60.00th=[11208], 00:20:35.940 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11476], 95.00th=[11476], 00:20:35.940 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:20:35.940 | 99.99th=[11610] 00:20:35.940 bw ( KiB/s): min= 1996, max=65536, per=0.78%, avg=16034.00, stdev=24527.84, samples=6 00:20:35.940 iops : min= 1, max= 64, avg=15.50, stdev=24.06, samples=6 00:20:35.940 lat (msec) : 1000=5.17%, 2000=13.22%, >=2000=81.61% 00:20:35.940 cpu : usr=0.02%, sys=0.59%, ctx=231, majf=0, minf=32769 00:20:35.940 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.4%, >=64=63.8% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:20:35.940 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257509: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(224MiB/12131msec) 00:20:35.940 slat (usec): min=55, max=2138.7k, avg=44694.10, stdev=280338.92 00:20:35.940 clat (msec): min=672, max=11370, avg=6571.83, stdev=4948.32 00:20:35.940 lat (msec): min=675, max=11371, avg=6616.52, stdev=4945.66 00:20:35.940 clat percentiles (msec): 00:20:35.940 | 1.00th=[ 676], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 760], 00:20:35.940 | 30.00th=[ 835], 40.00th=[ 1003], 50.00th=[10671], 60.00th=[10939], 00:20:35.940 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:20:35.940 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:20:35.940 | 99.99th=[11342] 00:20:35.940 bw ( KiB/s): min= 1935, max=180224, per=1.62%, avg=33090.50, stdev=72085.58, samples=6 00:20:35.940 iops : min= 1, max= 176, avg=32.17, stdev=70.47, samples=6 00:20:35.940 lat (msec) : 750=14.29%, 1000=25.89%, >=2000=59.82% 00:20:35.940 cpu : usr=0.00%, sys=0.68%, ctx=175, majf=0, minf=32769 00:20:35.940 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.3%, >=64=71.9% 00:20:35.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.940 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:20:35.940 issued rwts: total=224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.940 job2: (groupid=0, jobs=1): err= 0: pid=257510: Fri Apr 26 14:57:35 2024 00:20:35.940 read: IOPS=9, BW=9379KiB/s (9604kB/s)(111MiB/12119msec) 00:20:35.940 slat (usec): min=402, max=2077.3k, avg=90206.25, stdev=390526.44 00:20:35.941 clat 
(msec): min=2105, max=12113, avg=10156.76, stdev=2414.21 00:20:35.941 lat (msec): min=2138, max=12118, avg=10246.97, stdev=2294.75 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8557], 00:20:35.941 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[11745], 00:20:35.941 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:20:35.941 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.941 | 99.99th=[12147] 00:20:35.941 lat (msec) : >=2000=100.00% 00:20:35.941 cpu : usr=0.00%, sys=0.54%, ctx=85, majf=0, minf=28417 00:20:35.941 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.2%, 16=14.4%, 32=28.8%, >=64=43.2% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.941 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job2: (groupid=0, jobs=1): err= 0: pid=257511: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=1, BW=1775KiB/s (1818kB/s)(21.0MiB/12113msec) 00:20:35.941 slat (usec): min=1043, max=2138.7k, avg=476416.45, stdev=859058.43 00:20:35.941 clat (msec): min=2107, max=12111, avg=8129.73, stdev=3875.16 00:20:35.941 lat (msec): min=2119, max=12112, avg=8606.15, stdev=3709.18 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 4245], 00:20:35.941 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:20:35.941 | 70.00th=[10805], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:20:35.941 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.941 | 99.99th=[12147] 00:20:35.941 lat (msec) : >=2000=100.00% 00:20:35.941 cpu : usr=0.01%, sys=0.11%, ctx=52, majf=0, minf=5377 00:20:35.941 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 
16=28.6%, 32=0.0%, >=64=0.0% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.941 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257512: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=60, BW=60.8MiB/s (63.7MB/s)(735MiB/12092msec) 00:20:35.941 slat (usec): min=41, max=2000.6k, avg=13694.62, stdev=95092.29 00:20:35.941 clat (msec): min=518, max=6364, avg=1465.28, stdev=1287.27 00:20:35.941 lat (msec): min=518, max=6366, avg=1478.98, stdev=1298.90 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 567], 5.00th=[ 634], 10.00th=[ 667], 20.00th=[ 735], 00:20:35.941 | 30.00th=[ 776], 40.00th=[ 810], 50.00th=[ 827], 60.00th=[ 919], 00:20:35.941 | 70.00th=[ 1116], 80.00th=[ 2400], 90.00th=[ 2735], 95.00th=[ 3104], 00:20:35.941 | 99.00th=[ 6342], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 00:20:35.941 | 99.99th=[ 6342] 00:20:35.941 bw ( KiB/s): min= 1885, max=225280, per=6.76%, avg=138335.67, stdev=66088.96, samples=9 00:20:35.941 iops : min= 1, max= 220, avg=135.00, stdev=64.76, samples=9 00:20:35.941 lat (msec) : 750=22.04%, 1000=43.67%, 2000=5.71%, >=2000=28.57% 00:20:35.941 cpu : usr=0.02%, sys=1.30%, ctx=654, majf=0, minf=32769 00:20:35.941 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.941 issued rwts: total=735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257513: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=34, BW=34.3MiB/s (36.0MB/s)(413MiB/12033msec) 00:20:35.941 
slat (usec): min=63, max=2080.3k, avg=24214.81, stdev=157366.15 00:20:35.941 clat (msec): min=1077, max=7427, avg=3132.85, stdev=2456.75 00:20:35.941 lat (msec): min=1080, max=7430, avg=3157.06, stdev=2459.07 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 1083], 5.00th=[ 1116], 10.00th=[ 1116], 20.00th=[ 1183], 00:20:35.941 | 30.00th=[ 1385], 40.00th=[ 1569], 50.00th=[ 1670], 60.00th=[ 1720], 00:20:35.941 | 70.00th=[ 4212], 80.00th=[ 6678], 90.00th=[ 7080], 95.00th=[ 7215], 00:20:35.941 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:20:35.941 | 99.99th=[ 7416] 00:20:35.941 bw ( KiB/s): min=10240, max=120832, per=3.18%, avg=65080.89, stdev=46066.09, samples=9 00:20:35.941 iops : min= 10, max= 118, avg=63.56, stdev=44.99, samples=9 00:20:35.941 lat (msec) : 2000=62.47%, >=2000=37.53% 00:20:35.941 cpu : usr=0.05%, sys=0.80%, ctx=697, majf=0, minf=32769 00:20:35.941 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.7% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.941 issued rwts: total=413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257514: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=4, BW=4240KiB/s (4342kB/s)(50.0MiB/12076msec) 00:20:35.941 slat (usec): min=471, max=2107.1k, avg=200051.48, stdev=586765.12 00:20:35.941 clat (msec): min=2072, max=12071, avg=7818.17, stdev=3234.58 00:20:35.941 lat (msec): min=2091, max=12075, avg=8018.22, stdev=3180.85 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 2072], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 4279], 00:20:35.941 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8490], 00:20:35.941 | 70.00th=[10671], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:20:35.941 | 
99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:20:35.941 | 99.99th=[12013] 00:20:35.941 lat (msec) : >=2000=100.00% 00:20:35.941 cpu : usr=0.01%, sys=0.27%, ctx=73, majf=0, minf=12801 00:20:35.941 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:35.941 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257515: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=28, BW=28.5MiB/s (29.9MB/s)(345MiB/12086msec) 00:20:35.941 slat (usec): min=53, max=2177.0k, avg=29012.72, stdev=200350.36 00:20:35.941 clat (msec): min=666, max=6111, avg=2981.49, stdev=1975.40 00:20:35.941 lat (msec): min=666, max=6112, avg=3010.50, stdev=1980.24 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 676], 5.00th=[ 743], 10.00th=[ 802], 20.00th=[ 1020], 00:20:35.941 | 30.00th=[ 1045], 40.00th=[ 1167], 50.00th=[ 2769], 60.00th=[ 4329], 00:20:35.941 | 70.00th=[ 4665], 80.00th=[ 5000], 90.00th=[ 5336], 95.00th=[ 6074], 00:20:35.941 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:20:35.941 | 99.99th=[ 6141] 00:20:35.941 bw ( KiB/s): min= 1858, max=145408, per=3.63%, avg=74379.00, stdev=59157.76, samples=6 00:20:35.941 iops : min= 1, max= 142, avg=72.50, stdev=57.97, samples=6 00:20:35.941 lat (msec) : 750=5.51%, 1000=12.17%, 2000=27.25%, >=2000=55.07% 00:20:35.941 cpu : usr=0.02%, sys=1.03%, ctx=266, majf=0, minf=32769 00:20:35.941 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.3%, >=64=81.7% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:20:35.941 issued rwts: total=345,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257516: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=2, BW=2450KiB/s (2509kB/s)(29.0MiB/12122msec) 00:20:35.941 slat (msec): min=6, max=2097, avg=414.29, stdev=792.48 00:20:35.941 clat (msec): min=106, max=12114, avg=7624.27, stdev=3632.21 00:20:35.941 lat (msec): min=2142, max=12121, avg=8038.56, stdev=3423.31 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 107], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:20:35.941 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 8658], 00:20:35.941 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:20:35.941 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.941 | 99.99th=[12147] 00:20:35.941 lat (msec) : 250=3.45%, >=2000=96.55% 00:20:35.941 cpu : usr=0.00%, sys=0.21%, ctx=78, majf=0, minf=7425 00:20:35.941 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:20:35.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.941 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.941 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.941 job3: (groupid=0, jobs=1): err= 0: pid=257517: Fri Apr 26 14:57:35 2024 00:20:35.941 read: IOPS=13, BW=13.4MiB/s (14.0MB/s)(162MiB/12093msec) 00:20:35.941 slat (usec): min=117, max=2138.6k, avg=61784.36, stdev=294679.75 00:20:35.941 clat (msec): min=2082, max=8141, avg=6433.49, stdev=1516.61 00:20:35.941 lat (msec): min=2105, max=8159, avg=6495.27, stdev=1437.95 00:20:35.941 clat percentiles (msec): 00:20:35.941 | 1.00th=[ 2089], 5.00th=[ 3943], 10.00th=[ 4077], 20.00th=[ 5336], 00:20:35.941 | 30.00th=[ 6409], 40.00th=[ 6812], 50.00th=[ 6946], 60.00th=[ 7148], 00:20:35.942 | 70.00th=[ 
7349], 80.00th=[ 7550], 90.00th=[ 7819], 95.00th=[ 7953], 00:20:35.942 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:20:35.942 | 99.99th=[ 8154] 00:20:35.942 bw ( KiB/s): min= 1954, max=43008, per=0.70%, avg=14317.20, stdev=16465.48, samples=5 00:20:35.942 iops : min= 1, max= 42, avg=13.80, stdev=16.25, samples=5 00:20:35.942 lat (msec) : >=2000=100.00% 00:20:35.942 cpu : usr=0.01%, sys=0.79%, ctx=310, majf=0, minf=32769 00:20:35.942 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8% 00:20:35.942 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257518: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=4, BW=4915KiB/s (5033kB/s)(58.0MiB/12084msec) 00:20:35.942 slat (usec): min=431, max=2158.9k, avg=172812.52, stdev=539815.72 00:20:35.942 clat (msec): min=2060, max=12081, avg=10211.31, stdev=2938.12 00:20:35.942 lat (msec): min=2084, max=12083, avg=10384.12, stdev=2738.27 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 2056], 5.00th=[ 2106], 10.00th=[ 6275], 20.00th=[ 8557], 00:20:35.942 | 30.00th=[ 8658], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:20:35.942 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:20:35.942 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.942 | 99.99th=[12147] 00:20:35.942 lat (msec) : >=2000=100.00% 00:20:35.942 cpu : usr=0.00%, sys=0.36%, ctx=95, majf=0, minf=14849 00:20:35.942 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, 
>=64=0.0% 00:20:35.942 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257519: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=1, BW=1859KiB/s (1903kB/s)(22.0MiB/12120msec) 00:20:35.942 slat (msec): min=4, max=2117, avg=546.25, stdev=892.55 00:20:35.942 clat (msec): min=101, max=12041, avg=7431.77, stdev=4080.17 00:20:35.942 lat (msec): min=2128, max=12119, avg=7978.02, stdev=3850.11 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 103], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 2165], 00:20:35.942 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10671], 00:20:35.942 | 70.00th=[10805], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:20:35.942 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:20:35.942 | 99.99th=[12013] 00:20:35.942 lat (msec) : 250=4.55%, >=2000=95.45% 00:20:35.942 cpu : usr=0.01%, sys=0.14%, ctx=71, majf=0, minf=5633 00:20:35.942 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:35.942 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257520: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=7, BW=7783KiB/s (7970kB/s)(92.0MiB/12104msec) 00:20:35.942 slat (usec): min=471, max=2083.1k, avg=108898.76, stdev=439140.45 00:20:35.942 clat (msec): min=2084, max=12102, avg=9538.98, stdev=3034.67 00:20:35.942 lat (msec): min=4164, max=12103, avg=9647.88, stdev=2942.59 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:20:35.942 | 30.00th=[ 8490], 40.00th=[10671], 
50.00th=[10671], 60.00th=[12013], 00:20:35.942 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:20:35.942 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:20:35.942 | 99.99th=[12147] 00:20:35.942 lat (msec) : >=2000=100.00% 00:20:35.942 cpu : usr=0.00%, sys=0.56%, ctx=101, majf=0, minf=23553 00:20:35.942 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.7%, 16=17.4%, 32=34.8%, >=64=31.5% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.942 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257521: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=35, BW=35.7MiB/s (37.5MB/s)(431MiB/12062msec) 00:20:35.942 slat (usec): min=50, max=2096.9k, avg=23549.59, stdev=140839.18 00:20:35.942 clat (msec): min=830, max=7400, avg=2943.97, stdev=2360.97 00:20:35.942 lat (msec): min=833, max=7415, avg=2967.52, stdev=2364.56 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 835], 5.00th=[ 835], 10.00th=[ 860], 20.00th=[ 1062], 00:20:35.942 | 30.00th=[ 1301], 40.00th=[ 1620], 50.00th=[ 1670], 60.00th=[ 2039], 00:20:35.942 | 70.00th=[ 3272], 80.00th=[ 6007], 90.00th=[ 7013], 95.00th=[ 7148], 00:20:35.942 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7416], 99.95th=[ 7416], 00:20:35.942 | 99.99th=[ 7416] 00:20:35.942 bw ( KiB/s): min= 2035, max=131072, per=2.77%, avg=56598.09, stdev=41168.79, samples=11 00:20:35.942 iops : min= 1, max= 128, avg=55.18, stdev=40.34, samples=11 00:20:35.942 lat (msec) : 1000=17.17%, 2000=41.53%, >=2000=41.30% 00:20:35.942 cpu : usr=0.03%, sys=1.05%, ctx=551, majf=0, minf=32769 00:20:35.942 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.4% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:35.942 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.942 issued rwts: total=431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257522: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=37, BW=37.5MiB/s (39.3MB/s)(379MiB/10105msec) 00:20:35.942 slat (usec): min=53, max=1975.3k, avg=26412.77, stdev=183729.14 00:20:35.942 clat (msec): min=91, max=6067, avg=1980.48, stdev=1348.58 00:20:35.942 lat (msec): min=111, max=6069, avg=2006.90, stdev=1360.16 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 130], 5.00th=[ 793], 10.00th=[ 835], 20.00th=[ 1020], 00:20:35.942 | 30.00th=[ 1036], 40.00th=[ 1062], 50.00th=[ 1099], 60.00th=[ 2433], 00:20:35.942 | 70.00th=[ 2735], 80.00th=[ 2869], 90.00th=[ 3004], 95.00th=[ 4799], 00:20:35.942 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:20:35.942 | 99.99th=[ 6074] 00:20:35.942 bw ( KiB/s): min= 8192, max=126976, per=4.18%, avg=85635.67, stdev=51676.02, samples=6 00:20:35.942 iops : min= 8, max= 124, avg=83.50, stdev=50.37, samples=6 00:20:35.942 lat (msec) : 100=0.26%, 250=4.22%, 1000=15.30%, 2000=32.72%, >=2000=47.49% 00:20:35.942 cpu : usr=0.03%, sys=0.85%, ctx=323, majf=0, minf=32769 00:20:35.942 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.942 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257523: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(121MiB/10149msec) 00:20:35.942 slat (usec): min=398, max=2090.7k, avg=83742.50, stdev=347463.06 00:20:35.942 clat (msec): min=15, 
max=10031, avg=4890.61, stdev=3300.17 00:20:35.942 lat (msec): min=190, max=10148, avg=4974.35, stdev=3303.99 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 192], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 2039], 00:20:35.942 | 30.00th=[ 2165], 40.00th=[ 2333], 50.00th=[ 4463], 60.00th=[ 8020], 00:20:35.942 | 70.00th=[ 8154], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8658], 00:20:35.942 | 99.00th=[ 8792], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:20:35.942 | 99.99th=[10000] 00:20:35.942 lat (msec) : 20=0.83%, 250=10.74%, >=2000=88.43% 00:20:35.942 cpu : usr=0.00%, sys=0.82%, ctx=160, majf=0, minf=30977 00:20:35.942 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.942 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job3: (groupid=0, jobs=1): err= 0: pid=257524: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=31, BW=31.1MiB/s (32.6MB/s)(377MiB/12119msec) 00:20:35.942 slat (usec): min=141, max=2082.8k, avg=31756.81, stdev=193427.18 00:20:35.942 clat (msec): min=142, max=7512, avg=3283.45, stdev=2583.18 00:20:35.942 lat (msec): min=1045, max=7514, avg=3315.20, stdev=2580.60 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 1045], 5.00th=[ 1053], 10.00th=[ 1053], 20.00th=[ 1083], 00:20:35.942 | 30.00th=[ 1301], 40.00th=[ 1569], 50.00th=[ 1838], 60.00th=[ 2072], 00:20:35.942 | 70.00th=[ 6544], 80.00th=[ 6879], 90.00th=[ 7215], 95.00th=[ 7349], 00:20:35.942 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:20:35.942 | 99.99th=[ 7483] 00:20:35.942 bw ( KiB/s): min=10240, max=126976, per=3.11%, avg=63744.00, stdev=53656.46, samples=8 00:20:35.942 iops : min= 10, max= 124, avg=62.25, stdev=52.40, samples=8 
00:20:35.942 lat (msec) : 250=0.27%, 2000=56.23%, >=2000=43.50% 00:20:35.942 cpu : usr=0.07%, sys=1.05%, ctx=683, majf=0, minf=32769 00:20:35.942 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:20:35.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.942 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.942 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.942 job4: (groupid=0, jobs=1): err= 0: pid=257525: Fri Apr 26 14:57:35 2024 00:20:35.942 read: IOPS=56, BW=56.1MiB/s (58.8MB/s)(566MiB/10086msec) 00:20:35.942 slat (usec): min=55, max=1947.2k, avg=17712.16, stdev=95361.83 00:20:35.942 clat (msec): min=56, max=3763, avg=1876.76, stdev=999.31 00:20:35.942 lat (msec): min=177, max=3774, avg=1894.47, stdev=1001.56 00:20:35.942 clat percentiles (msec): 00:20:35.942 | 1.00th=[ 222], 5.00th=[ 575], 10.00th=[ 894], 20.00th=[ 1116], 00:20:35.942 | 30.00th=[ 1418], 40.00th=[ 1552], 50.00th=[ 1620], 60.00th=[ 1670], 00:20:35.942 | 70.00th=[ 1754], 80.00th=[ 3507], 90.00th=[ 3608], 95.00th=[ 3675], 00:20:35.942 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775], 00:20:35.942 | 99.99th=[ 3775] 00:20:35.942 bw ( KiB/s): min=24576, max=94208, per=3.37%, avg=68989.85, stdev=23678.68, samples=13 00:20:35.943 iops : min= 24, max= 92, avg=67.31, stdev=23.10, samples=13 00:20:35.943 lat (msec) : 100=0.18%, 250=1.06%, 500=3.00%, 750=3.53%, 1000=4.24% 00:20:35.943 lat (msec) : 2000=64.13%, >=2000=23.85% 00:20:35.943 cpu : usr=0.07%, sys=1.53%, ctx=943, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.9% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.943 issued rwts: total=566,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257526: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=47, BW=47.0MiB/s (49.3MB/s)(479MiB/10185msec) 00:20:35.943 slat (usec): min=44, max=1994.3k, avg=21099.02, stdev=129911.78 00:20:35.943 clat (msec): min=75, max=5992, avg=2248.73, stdev=1302.18 00:20:35.943 lat (msec): min=191, max=7644, avg=2269.83, stdev=1317.58 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 213], 5.00th=[ 334], 10.00th=[ 542], 20.00th=[ 1053], 00:20:35.943 | 30.00th=[ 1368], 40.00th=[ 1519], 50.00th=[ 1770], 60.00th=[ 2769], 00:20:35.943 | 70.00th=[ 3608], 80.00th=[ 3775], 90.00th=[ 3910], 95.00th=[ 3910], 00:20:35.943 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 6007], 99.95th=[ 6007], 00:20:35.943 | 99.99th=[ 6007] 00:20:35.943 bw ( KiB/s): min=30720, max=143360, per=3.51%, avg=71884.80, stdev=34002.69, samples=10 00:20:35.943 iops : min= 30, max= 140, avg=70.20, stdev=33.21, samples=10 00:20:35.943 lat (msec) : 100=0.21%, 250=3.13%, 500=6.26%, 750=5.22%, 1000=4.59% 00:20:35.943 lat (msec) : 2000=32.99%, >=2000=47.60% 00:20:35.943 cpu : usr=0.03%, sys=1.23%, ctx=720, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.8% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.943 issued rwts: total=479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257527: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=76, BW=76.9MiB/s (80.6MB/s)(777MiB/10103msec) 00:20:35.943 slat (usec): min=54, max=2127.2k, avg=12871.39, stdev=89713.45 00:20:35.943 clat (msec): min=94, max=3013, avg=1344.26, stdev=651.76 00:20:35.943 lat (msec): min=119, max=3017, avg=1357.13, 
stdev=652.26 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 192], 5.00th=[ 760], 10.00th=[ 835], 20.00th=[ 894], 00:20:35.943 | 30.00th=[ 936], 40.00th=[ 1011], 50.00th=[ 1183], 60.00th=[ 1284], 00:20:35.943 | 70.00th=[ 1351], 80.00th=[ 1452], 90.00th=[ 2668], 95.00th=[ 2836], 00:20:35.943 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 3004], 99.95th=[ 3004], 00:20:35.943 | 99.99th=[ 3004] 00:20:35.943 bw ( KiB/s): min=34816, max=165888, per=5.42%, avg=110911.50, stdev=37199.49, samples=12 00:20:35.943 iops : min= 34, max= 162, avg=108.25, stdev=36.29, samples=12 00:20:35.943 lat (msec) : 100=0.13%, 250=1.80%, 500=0.26%, 750=1.67%, 1000=34.11% 00:20:35.943 lat (msec) : 2000=45.30%, >=2000=16.73% 00:20:35.943 cpu : usr=0.05%, sys=1.81%, ctx=830, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.943 issued rwts: total=777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257528: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=50, BW=50.1MiB/s (52.5MB/s)(505MiB/10085msec) 00:20:35.943 slat (usec): min=44, max=1760.1k, avg=19837.99, stdev=97658.95 00:20:35.943 clat (msec): min=62, max=3675, avg=2047.36, stdev=976.32 00:20:35.943 lat (msec): min=206, max=3689, avg=2067.20, stdev=977.61 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 226], 5.00th=[ 575], 10.00th=[ 835], 20.00th=[ 1318], 00:20:35.943 | 30.00th=[ 1485], 40.00th=[ 1653], 50.00th=[ 1888], 60.00th=[ 2039], 00:20:35.943 | 70.00th=[ 2165], 80.00th=[ 3373], 90.00th=[ 3540], 95.00th=[ 3608], 00:20:35.943 | 99.00th=[ 3675], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:20:35.943 | 99.99th=[ 3675] 00:20:35.943 bw ( KiB/s): min= 4096, 
max=120832, per=3.14%, avg=64341.33, stdev=31612.60, samples=12 00:20:35.943 iops : min= 4, max= 118, avg=62.83, stdev=30.87, samples=12 00:20:35.943 lat (msec) : 100=0.20%, 250=1.39%, 500=2.77%, 750=4.16%, 1000=4.55% 00:20:35.943 lat (msec) : 2000=42.97%, >=2000=43.96% 00:20:35.943 cpu : usr=0.00%, sys=1.44%, ctx=849, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.943 issued rwts: total=505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257529: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=36, BW=36.9MiB/s (38.7MB/s)(374MiB/10145msec) 00:20:35.943 slat (usec): min=42, max=2064.8k, avg=26912.18, stdev=161245.28 00:20:35.943 clat (msec): min=76, max=5778, avg=2223.76, stdev=1426.00 00:20:35.943 lat (msec): min=190, max=7756, avg=2250.67, stdev=1447.55 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 239], 5.00th=[ 414], 10.00th=[ 659], 20.00th=[ 911], 00:20:35.943 | 30.00th=[ 1217], 40.00th=[ 1334], 50.00th=[ 1435], 60.00th=[ 3440], 00:20:35.943 | 70.00th=[ 3641], 80.00th=[ 3742], 90.00th=[ 3775], 95.00th=[ 3876], 00:20:35.943 | 99.00th=[ 5067], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:20:35.943 | 99.99th=[ 5805] 00:20:35.943 bw ( KiB/s): min=18432, max=135168, per=3.52%, avg=71972.57, stdev=42392.92, samples=7 00:20:35.943 iops : min= 18, max= 132, avg=70.29, stdev=41.40, samples=7 00:20:35.943 lat (msec) : 100=0.27%, 250=1.87%, 500=4.28%, 750=7.22%, 1000=8.82% 00:20:35.943 lat (msec) : 2000=35.29%, >=2000=42.25% 00:20:35.943 cpu : usr=0.04%, sys=1.04%, ctx=602, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.2% 00:20:35.943 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.943 issued rwts: total=374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257530: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(617MiB/12025msec) 00:20:35.943 slat (usec): min=52, max=1853.5k, avg=16227.58, stdev=88457.49 00:20:35.943 clat (msec): min=1062, max=3539, avg=2096.91, stdev=850.98 00:20:35.943 lat (msec): min=1062, max=3563, avg=2113.13, stdev=851.84 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 1099], 5.00th=[ 1217], 10.00th=[ 1267], 20.00th=[ 1334], 00:20:35.943 | 30.00th=[ 1435], 40.00th=[ 1552], 50.00th=[ 1653], 60.00th=[ 2165], 00:20:35.943 | 70.00th=[ 2601], 80.00th=[ 3272], 90.00th=[ 3440], 95.00th=[ 3507], 00:20:35.943 | 99.00th=[ 3540], 99.50th=[ 3540], 99.90th=[ 3540], 99.95th=[ 3540], 00:20:35.943 | 99.99th=[ 3540] 00:20:35.943 bw ( KiB/s): min= 1402, max=129024, per=3.77%, avg=77144.15, stdev=34228.82, samples=13 00:20:35.943 iops : min= 1, max= 126, avg=75.31, stdev=33.49, samples=13 00:20:35.943 lat (msec) : 2000=58.35%, >=2000=41.65% 00:20:35.943 cpu : usr=0.02%, sys=1.26%, ctx=1026, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.943 issued rwts: total=617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257531: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=71, BW=71.1MiB/s (74.6MB/s)(722MiB/10153msec) 00:20:35.943 slat (usec): min=43, max=1883.4k, avg=13872.67, stdev=83139.43 
00:20:35.943 clat (msec): min=131, max=3203, avg=1500.77, stdev=871.05 00:20:35.943 lat (msec): min=173, max=3203, avg=1514.64, stdev=873.13 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 184], 5.00th=[ 409], 10.00th=[ 634], 20.00th=[ 877], 00:20:35.943 | 30.00th=[ 936], 40.00th=[ 995], 50.00th=[ 1045], 60.00th=[ 1385], 00:20:35.943 | 70.00th=[ 1989], 80.00th=[ 2232], 90.00th=[ 3037], 95.00th=[ 3104], 00:20:35.943 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:20:35.943 | 99.99th=[ 3205] 00:20:35.943 bw ( KiB/s): min=40960, max=157696, per=4.57%, avg=93556.08, stdev=43009.40, samples=13 00:20:35.943 iops : min= 40, max= 154, avg=91.31, stdev=41.94, samples=13 00:20:35.943 lat (msec) : 250=2.22%, 500=4.43%, 750=6.23%, 1000=29.09%, 2000=28.81% 00:20:35.943 lat (msec) : >=2000=29.22% 00:20:35.943 cpu : usr=0.04%, sys=1.60%, ctx=835, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.943 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.943 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.943 job4: (groupid=0, jobs=1): err= 0: pid=257532: Fri Apr 26 14:57:35 2024 00:20:35.943 read: IOPS=61, BW=61.8MiB/s (64.8MB/s)(628MiB/10159msec) 00:20:35.943 slat (usec): min=43, max=1994.3k, avg=16077.72, stdev=82668.88 00:20:35.943 clat (msec): min=56, max=4642, avg=1946.24, stdev=1144.04 00:20:35.943 lat (msec): min=180, max=4647, avg=1962.32, stdev=1147.82 00:20:35.943 clat percentiles (msec): 00:20:35.943 | 1.00th=[ 279], 5.00th=[ 827], 10.00th=[ 1183], 20.00th=[ 1267], 00:20:35.943 | 30.00th=[ 1368], 40.00th=[ 1418], 50.00th=[ 1469], 60.00th=[ 1586], 00:20:35.943 | 70.00th=[ 1670], 80.00th=[ 3708], 90.00th=[ 4077], 95.00th=[ 4463], 00:20:35.943 | 99.00th=[ 4530], 
99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:20:35.943 | 99.99th=[ 4665] 00:20:35.943 bw ( KiB/s): min=16384, max=120832, per=3.34%, avg=68261.20, stdev=27183.03, samples=15 00:20:35.943 iops : min= 16, max= 118, avg=66.60, stdev=26.61, samples=15 00:20:35.943 lat (msec) : 100=0.16%, 250=0.64%, 500=2.07%, 750=1.59%, 1000=1.75% 00:20:35.943 lat (msec) : 2000=71.34%, >=2000=22.45% 00:20:35.943 cpu : usr=0.08%, sys=1.45%, ctx=1030, majf=0, minf=32769 00:20:35.943 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:20:35.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.944 issued rwts: total=628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job4: (groupid=0, jobs=1): err= 0: pid=257533: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=171, BW=171MiB/s (179MB/s)(2061MiB/12048msec) 00:20:35.944 slat (usec): min=58, max=162844, avg=4868.28, stdev=15661.18 00:20:35.944 clat (msec): min=176, max=2339, avg=719.33, stdev=500.48 00:20:35.944 lat (msec): min=178, max=2340, avg=724.20, stdev=501.43 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 321], 00:20:35.944 | 30.00th=[ 380], 40.00th=[ 451], 50.00th=[ 726], 60.00th=[ 802], 00:20:35.944 | 70.00th=[ 877], 80.00th=[ 1011], 90.00th=[ 1133], 95.00th=[ 2198], 00:20:35.944 | 99.00th=[ 2299], 99.50th=[ 2333], 99.90th=[ 2333], 99.95th=[ 2333], 00:20:35.944 | 99.99th=[ 2333] 00:20:35.944 bw ( KiB/s): min= 1954, max=542720, per=9.68%, avg=198036.90, stdev=139982.58, samples=20 00:20:35.944 iops : min= 1, max= 530, avg=193.35, stdev=136.77, samples=20 00:20:35.944 lat (msec) : 250=18.49%, 500=23.34%, 750=11.40%, 1000=25.38%, 2000=15.24% 00:20:35.944 lat (msec) : >=2000=6.16% 00:20:35.944 cpu : usr=0.16%, sys=2.13%, ctx=1445, majf=0, minf=32769 
00:20:35.944 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.944 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job4: (groupid=0, jobs=1): err= 0: pid=257534: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=62, BW=62.8MiB/s (65.8MB/s)(638MiB/10160msec) 00:20:35.944 slat (usec): min=57, max=1903.6k, avg=15828.16, stdev=77761.06 00:20:35.944 clat (msec): min=56, max=3957, avg=1927.96, stdev=895.29 00:20:35.944 lat (msec): min=180, max=3969, avg=1943.78, stdev=897.41 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 275], 5.00th=[ 642], 10.00th=[ 1062], 20.00th=[ 1267], 00:20:35.944 | 30.00th=[ 1418], 40.00th=[ 1687], 50.00th=[ 1787], 60.00th=[ 1871], 00:20:35.944 | 70.00th=[ 1938], 80.00th=[ 2299], 90.00th=[ 3473], 95.00th=[ 3842], 00:20:35.944 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:20:35.944 | 99.99th=[ 3943] 00:20:35.944 bw ( KiB/s): min=20480, max=122880, per=3.40%, avg=69619.73, stdev=24915.18, samples=15 00:20:35.944 iops : min= 20, max= 120, avg=67.93, stdev=24.28, samples=15 00:20:35.944 lat (msec) : 100=0.16%, 250=0.78%, 500=2.35%, 750=3.13%, 1000=2.35% 00:20:35.944 lat (msec) : 2000=65.20%, >=2000=26.02% 00:20:35.944 cpu : usr=0.07%, sys=1.48%, ctx=1118, majf=0, minf=32572 00:20:35.944 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.944 issued rwts: total=638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job4: (groupid=0, jobs=1): err= 0: 
pid=257535: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=20, BW=20.1MiB/s (21.1MB/s)(203MiB/10075msec) 00:20:35.944 slat (usec): min=52, max=2068.4k, avg=49320.72, stdev=252364.31 00:20:35.944 clat (msec): min=61, max=8804, avg=2815.13, stdev=2656.91 00:20:35.944 lat (msec): min=206, max=8830, avg=2864.45, stdev=2690.11 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 218], 5.00th=[ 330], 10.00th=[ 439], 20.00th=[ 701], 00:20:35.944 | 30.00th=[ 978], 40.00th=[ 1368], 50.00th=[ 1653], 60.00th=[ 2039], 00:20:35.944 | 70.00th=[ 3910], 80.00th=[ 5873], 90.00th=[ 7752], 95.00th=[ 7953], 00:20:35.944 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:20:35.944 | 99.99th=[ 8792] 00:20:35.944 bw ( KiB/s): min=26624, max=69632, per=2.50%, avg=51200.00, stdev=22152.51, samples=3 00:20:35.944 iops : min= 26, max= 68, avg=50.00, stdev=21.63, samples=3 00:20:35.944 lat (msec) : 100=0.49%, 250=3.94%, 500=8.37%, 750=12.32%, 1000=5.91% 00:20:35.944 lat (msec) : 2000=26.60%, >=2000=42.36% 00:20:35.944 cpu : usr=0.01%, sys=0.79%, ctx=296, majf=0, minf=32769 00:20:35.944 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.9%, 32=15.8%, >=64=69.0% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:20:35.944 issued rwts: total=203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job4: (groupid=0, jobs=1): err= 0: pid=257536: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=106, BW=106MiB/s (111MB/s)(1282MiB/12057msec) 00:20:35.944 slat (usec): min=41, max=1132.3k, avg=7795.49, stdev=35500.21 00:20:35.944 clat (msec): min=309, max=3403, avg=997.08, stdev=698.73 00:20:35.944 lat (msec): min=312, max=3463, avg=1004.88, stdev=703.13 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 355], 20.00th=[ 376], 00:20:35.944 | 
30.00th=[ 558], 40.00th=[ 760], 50.00th=[ 835], 60.00th=[ 944], 00:20:35.944 | 70.00th=[ 1053], 80.00th=[ 1150], 90.00th=[ 2232], 95.00th=[ 2534], 00:20:35.944 | 99.00th=[ 3306], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3406], 00:20:35.944 | 99.99th=[ 3406] 00:20:35.944 bw ( KiB/s): min=45056, max=369378, per=8.26%, avg=168992.50, stdev=91475.92, samples=14 00:20:35.944 iops : min= 44, max= 360, avg=164.93, stdev=89.23, samples=14 00:20:35.944 lat (msec) : 500=27.77%, 750=10.84%, 1000=26.37%, 2000=21.92%, >=2000=13.10% 00:20:35.944 cpu : usr=0.02%, sys=1.64%, ctx=1111, majf=0, minf=32769 00:20:35.944 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.944 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job4: (groupid=0, jobs=1): err= 0: pid=257537: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=42, BW=42.4MiB/s (44.4MB/s)(427MiB/10077msec) 00:20:35.944 slat (usec): min=43, max=2074.3k, avg=23420.17, stdev=146354.71 00:20:35.944 clat (msec): min=72, max=6212, avg=2521.20, stdev=2212.39 00:20:35.944 lat (msec): min=77, max=6220, avg=2544.62, stdev=2216.83 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 203], 5.00th=[ 464], 10.00th=[ 617], 20.00th=[ 802], 00:20:35.944 | 30.00th=[ 919], 40.00th=[ 1116], 50.00th=[ 1452], 60.00th=[ 1586], 00:20:35.944 | 70.00th=[ 3574], 80.00th=[ 5805], 90.00th=[ 6074], 95.00th=[ 6074], 00:20:35.944 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6208], 99.95th=[ 6208], 00:20:35.944 | 99.99th=[ 6208] 00:20:35.944 bw ( KiB/s): min=14336, max=129024, per=3.00%, avg=61440.00, stdev=36686.59, samples=10 00:20:35.944 iops : min= 14, max= 126, avg=60.00, stdev=35.83, samples=10 00:20:35.944 lat (msec) : 100=0.70%, 250=1.17%, 
500=4.22%, 750=11.94%, 1000=17.80% 00:20:35.944 lat (msec) : 2000=29.74%, >=2000=34.43% 00:20:35.944 cpu : usr=0.00%, sys=1.27%, ctx=561, majf=0, minf=32769 00:20:35.944 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.2% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.944 issued rwts: total=427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job5: (groupid=0, jobs=1): err= 0: pid=257538: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=224, BW=224MiB/s (235MB/s)(2263MiB/10091msec) 00:20:35.944 slat (usec): min=48, max=1071.8k, avg=4434.08, stdev=25971.80 00:20:35.944 clat (msec): min=42, max=2523, avg=455.45, stdev=364.06 00:20:35.944 lat (msec): min=164, max=2582, avg=459.88, stdev=369.08 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 165], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 167], 00:20:35.944 | 30.00th=[ 169], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 506], 00:20:35.944 | 70.00th=[ 684], 80.00th=[ 818], 90.00th=[ 986], 95.00th=[ 1116], 00:20:35.944 | 99.00th=[ 1217], 99.50th=[ 1284], 99.90th=[ 2500], 99.95th=[ 2534], 00:20:35.944 | 99.99th=[ 2534] 00:20:35.944 bw ( KiB/s): min=110592, max=781851, per=14.25%, avg=291633.53, stdev=258376.81, samples=15 00:20:35.944 iops : min= 108, max= 763, avg=284.67, stdev=252.31, samples=15 00:20:35.944 lat (msec) : 50=0.04%, 250=54.71%, 500=4.99%, 750=17.01%, 1000=13.52% 00:20:35.944 lat (msec) : 2000=9.37%, >=2000=0.35% 00:20:35.944 cpu : usr=0.05%, sys=2.42%, ctx=1697, majf=0, minf=32769 00:20:35.944 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.944 issued rwts: 
total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job5: (groupid=0, jobs=1): err= 0: pid=257539: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=35, BW=35.9MiB/s (37.6MB/s)(363MiB/10118msec) 00:20:35.944 slat (usec): min=52, max=2073.3k, avg=27601.20, stdev=190243.24 00:20:35.944 clat (msec): min=95, max=6463, avg=1764.69, stdev=1286.45 00:20:35.944 lat (msec): min=123, max=6467, avg=1792.30, stdev=1305.86 00:20:35.944 clat percentiles (msec): 00:20:35.944 | 1.00th=[ 146], 5.00th=[ 726], 10.00th=[ 793], 20.00th=[ 835], 00:20:35.944 | 30.00th=[ 869], 40.00th=[ 902], 50.00th=[ 927], 60.00th=[ 2265], 00:20:35.944 | 70.00th=[ 2567], 80.00th=[ 2735], 90.00th=[ 2903], 95.00th=[ 3037], 00:20:35.944 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:20:35.944 | 99.99th=[ 6477] 00:20:35.944 bw ( KiB/s): min=30720, max=153600, per=4.71%, avg=96310.80, stdev=59703.38, samples=5 00:20:35.944 iops : min= 30, max= 150, avg=94.00, stdev=58.26, samples=5 00:20:35.944 lat (msec) : 100=0.28%, 250=3.86%, 500=0.28%, 750=1.65%, 1000=48.21% 00:20:35.944 lat (msec) : 2000=2.20%, >=2000=43.53% 00:20:35.944 cpu : usr=0.00%, sys=1.20%, ctx=238, majf=0, minf=32769 00:20:35.944 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6% 00:20:35.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.944 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.944 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.944 job5: (groupid=0, jobs=1): err= 0: pid=257540: Fri Apr 26 14:57:35 2024 00:20:35.944 read: IOPS=48, BW=48.1MiB/s (50.4MB/s)(486MiB/10106msec) 00:20:35.944 slat (usec): min=52, max=2051.5k, avg=20616.63, stdev=138029.81 00:20:35.944 clat (msec): min=81, max=3506, avg=2116.96, stdev=1039.94 00:20:35.944 lat (msec): 
min=184, max=3512, avg=2137.58, stdev=1035.63 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 205], 5.00th=[ 877], 10.00th=[ 894], 20.00th=[ 961], 00:20:35.945 | 30.00th=[ 1133], 40.00th=[ 1200], 50.00th=[ 2534], 60.00th=[ 2970], 00:20:35.945 | 70.00th=[ 3104], 80.00th=[ 3171], 90.00th=[ 3239], 95.00th=[ 3306], 00:20:35.945 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3507], 99.95th=[ 3507], 00:20:35.945 | 99.99th=[ 3507] 00:20:35.945 bw ( KiB/s): min= 4096, max=141029, per=3.98%, avg=81453.00, stdev=52514.63, samples=9 00:20:35.945 iops : min= 4, max= 137, avg=79.44, stdev=51.18, samples=9 00:20:35.945 lat (msec) : 100=0.21%, 250=1.44%, 1000=19.34%, 2000=24.49%, >=2000=54.53% 00:20:35.945 cpu : usr=0.03%, sys=1.43%, ctx=463, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.0% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.945 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.945 issued rwts: total=486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257541: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=112, BW=112MiB/s (118MB/s)(1365MiB/12176msec) 00:20:35.945 slat (usec): min=57, max=2042.1k, avg=8798.18, stdev=66160.49 00:20:35.945 clat (msec): min=155, max=3194, avg=1015.70, stdev=606.01 00:20:35.945 lat (msec): min=527, max=3194, avg=1024.49, stdev=607.85 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 531], 5.00th=[ 535], 10.00th=[ 542], 20.00th=[ 575], 00:20:35.945 | 30.00th=[ 751], 40.00th=[ 818], 50.00th=[ 877], 60.00th=[ 911], 00:20:35.945 | 70.00th=[ 969], 80.00th=[ 1062], 90.00th=[ 2140], 95.00th=[ 2635], 00:20:35.945 | 99.00th=[ 3071], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:20:35.945 | 99.99th=[ 3205] 00:20:35.945 bw ( KiB/s): min= 2052, max=241664, per=7.29%, 
avg=149142.82, stdev=56515.59, samples=17 00:20:35.945 iops : min= 2, max= 236, avg=145.65, stdev=55.19, samples=17 00:20:35.945 lat (msec) : 250=0.07%, 750=29.89%, 1000=42.05%, 2000=16.34%, >=2000=11.65% 00:20:35.945 cpu : usr=0.08%, sys=1.48%, ctx=1060, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.945 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:35.945 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257542: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=53, BW=53.7MiB/s (56.4MB/s)(545MiB/10140msec) 00:20:35.945 slat (usec): min=53, max=2038.7k, avg=18449.56, stdev=136649.33 00:20:35.945 clat (msec): min=81, max=6997, avg=2252.70, stdev=2205.00 00:20:35.945 lat (msec): min=158, max=6997, avg=2271.15, stdev=2212.51 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 209], 5.00th=[ 384], 10.00th=[ 567], 20.00th=[ 751], 00:20:35.945 | 30.00th=[ 852], 40.00th=[ 995], 50.00th=[ 1062], 60.00th=[ 1116], 00:20:35.945 | 70.00th=[ 2903], 80.00th=[ 4665], 90.00th=[ 6812], 95.00th=[ 6879], 00:20:35.945 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:20:35.945 | 99.99th=[ 7013] 00:20:35.945 bw ( KiB/s): min=12288, max=159744, per=3.79%, avg=77637.82, stdev=52426.04, samples=11 00:20:35.945 iops : min= 12, max= 156, avg=75.82, stdev=51.20, samples=11 00:20:35.945 lat (msec) : 100=0.18%, 250=1.47%, 500=6.24%, 750=11.93%, 1000=20.92% 00:20:35.945 lat (msec) : 2000=25.87%, >=2000=33.39% 00:20:35.945 cpu : usr=0.02%, sys=1.13%, ctx=498, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:35.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.945 issued rwts: total=545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257543: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=39, BW=39.3MiB/s (41.2MB/s)(398MiB/10136msec) 00:20:35.945 slat (usec): min=42, max=1906.5k, avg=25123.39, stdev=164112.53 00:20:35.945 clat (msec): min=134, max=8262, avg=1394.16, stdev=1811.42 00:20:35.945 lat (msec): min=136, max=8332, avg=1419.28, stdev=1844.12 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 140], 5.00th=[ 259], 10.00th=[ 464], 20.00th=[ 651], 00:20:35.945 | 30.00th=[ 718], 40.00th=[ 735], 50.00th=[ 768], 60.00th=[ 827], 00:20:35.945 | 70.00th=[ 885], 80.00th=[ 986], 90.00th=[ 3004], 95.00th=[ 6946], 00:20:35.945 | 99.00th=[ 7215], 99.50th=[ 8154], 99.90th=[ 8288], 99.95th=[ 8288], 00:20:35.945 | 99.99th=[ 8288] 00:20:35.945 bw ( KiB/s): min=61440, max=178176, per=6.78%, avg=138752.00, stdev=53728.30, samples=4 00:20:35.945 iops : min= 60, max= 174, avg=135.50, stdev=52.47, samples=4 00:20:35.945 lat (msec) : 250=4.02%, 500=9.55%, 750=29.15%, 1000=39.95%, 2000=2.51% 00:20:35.945 lat (msec) : >=2000=14.82% 00:20:35.945 cpu : usr=0.00%, sys=1.05%, ctx=377, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.945 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.945 issued rwts: total=398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257544: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=66, BW=66.2MiB/s (69.5MB/s)(675MiB/10190msec) 00:20:35.945 slat (usec): min=51, max=1957.6k, avg=14854.36, stdev=117363.11 
00:20:35.945 clat (msec): min=156, max=3397, avg=1631.54, stdev=1072.22 00:20:35.945 lat (msec): min=235, max=3397, avg=1646.39, stdev=1071.86 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 405], 5.00th=[ 523], 10.00th=[ 609], 20.00th=[ 726], 00:20:35.945 | 30.00th=[ 768], 40.00th=[ 810], 50.00th=[ 885], 60.00th=[ 2198], 00:20:35.945 | 70.00th=[ 2769], 80.00th=[ 2869], 90.00th=[ 3104], 95.00th=[ 3171], 00:20:35.945 | 99.00th=[ 3373], 99.50th=[ 3406], 99.90th=[ 3406], 99.95th=[ 3406], 00:20:35.945 | 99.99th=[ 3406] 00:20:35.945 bw ( KiB/s): min= 6144, max=239616, per=5.47%, avg=112025.60, stdev=83268.82, samples=10 00:20:35.945 iops : min= 6, max= 234, avg=109.40, stdev=81.32, samples=10 00:20:35.945 lat (msec) : 250=0.44%, 500=3.41%, 750=21.48%, 1000=29.33%, 2000=3.85% 00:20:35.945 lat (msec) : >=2000=41.48% 00:20:35.945 cpu : usr=0.05%, sys=1.68%, ctx=534, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.945 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:35.945 issued rwts: total=675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257545: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=33, BW=33.7MiB/s (35.4MB/s)(341MiB/10106msec) 00:20:35.945 slat (usec): min=58, max=1956.7k, avg=29335.37, stdev=181683.61 00:20:35.945 clat (msec): min=99, max=6617, avg=2120.04, stdev=1656.40 00:20:35.945 lat (msec): min=144, max=6636, avg=2149.38, stdev=1669.16 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 171], 5.00th=[ 592], 10.00th=[ 667], 20.00th=[ 726], 00:20:35.945 | 30.00th=[ 827], 40.00th=[ 1099], 50.00th=[ 1401], 60.00th=[ 2433], 00:20:35.945 | 70.00th=[ 2702], 80.00th=[ 3071], 90.00th=[ 5403], 95.00th=[ 5403], 00:20:35.945 | 99.00th=[ 6611], 
99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:20:35.945 | 99.99th=[ 6611] 00:20:35.945 bw ( KiB/s): min=16384, max=223232, per=4.28%, avg=87654.40, stdev=82116.88, samples=5 00:20:35.945 iops : min= 16, max= 218, avg=85.60, stdev=80.19, samples=5 00:20:35.945 lat (msec) : 100=0.29%, 250=2.05%, 750=19.94%, 1000=15.54%, 2000=15.25% 00:20:35.945 lat (msec) : >=2000=46.92% 00:20:35.945 cpu : usr=0.01%, sys=1.22%, ctx=326, majf=0, minf=32769 00:20:35.945 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.5% 00:20:35.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.945 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:20:35.945 issued rwts: total=341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.945 job5: (groupid=0, jobs=1): err= 0: pid=257546: Fri Apr 26 14:57:35 2024 00:20:35.945 read: IOPS=16, BW=16.6MiB/s (17.5MB/s)(170MiB/10212msec) 00:20:35.945 slat (usec): min=87, max=2038.7k, avg=59257.24, stdev=268667.52 00:20:35.945 clat (msec): min=137, max=8585, avg=5077.43, stdev=2471.11 00:20:35.945 lat (msec): min=242, max=8587, avg=5136.69, stdev=2465.73 00:20:35.945 clat percentiles (msec): 00:20:35.945 | 1.00th=[ 243], 5.00th=[ 1552], 10.00th=[ 1703], 20.00th=[ 2165], 00:20:35.945 | 30.00th=[ 2400], 40.00th=[ 4597], 50.00th=[ 5738], 60.00th=[ 6007], 00:20:35.945 | 70.00th=[ 6342], 80.00th=[ 7953], 90.00th=[ 8221], 95.00th=[ 8423], 00:20:35.945 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:20:35.945 | 99.99th=[ 8557] 00:20:35.945 bw ( KiB/s): min= 8192, max=38912, per=1.05%, avg=21504.00, stdev=13006.55, samples=4 00:20:35.945 iops : min= 8, max= 38, avg=21.00, stdev=12.70, samples=4 00:20:35.946 lat (msec) : 250=1.18%, 500=1.76%, 2000=14.71%, >=2000=82.35% 00:20:35.946 cpu : usr=0.00%, sys=0.90%, ctx=245, majf=0, minf=32769 00:20:35.946 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 
16=9.4%, 32=18.8%, >=64=62.9% 00:20:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.946 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:20:35.946 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.946 job5: (groupid=0, jobs=1): err= 0: pid=257547: Fri Apr 26 14:57:35 2024 00:20:35.946 read: IOPS=5, BW=5741KiB/s (5879kB/s)(68.0MiB/12128msec) 00:20:35.946 slat (usec): min=368, max=2115.1k, avg=175981.66, stdev=522277.36 00:20:35.946 clat (msec): min=160, max=11947, avg=6397.38, stdev=3101.40 00:20:35.946 lat (msec): min=2275, max=12127, avg=6573.36, stdev=3081.69 00:20:35.946 clat percentiles (msec): 00:20:35.946 | 1.00th=[ 161], 5.00th=[ 2265], 10.00th=[ 2265], 20.00th=[ 2265], 00:20:35.946 | 30.00th=[ 4144], 40.00th=[ 6409], 50.00th=[ 6544], 60.00th=[ 8557], 00:20:35.946 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[10805], 95.00th=[10805], 00:20:35.946 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:20:35.946 | 99.99th=[12013] 00:20:35.946 lat (msec) : 250=1.47%, >=2000=98.53% 00:20:35.946 cpu : usr=0.02%, sys=0.38%, ctx=91, majf=0, minf=17409 00:20:35.946 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:20:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.946 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:35.946 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.946 job5: (groupid=0, jobs=1): err= 0: pid=257548: Fri Apr 26 14:57:35 2024 00:20:35.946 read: IOPS=34, BW=34.9MiB/s (36.6MB/s)(354MiB/10154msec) 00:20:35.946 slat (usec): min=45, max=2043.9k, avg=28261.40, stdev=209439.54 00:20:35.946 clat (msec): min=146, max=9111, avg=3526.68, stdev=3730.94 00:20:35.946 lat (msec): min=154, max=9111, 
avg=3554.94, stdev=3736.39 00:20:35.946 clat percentiles (msec): 00:20:35.946 | 1.00th=[ 163], 5.00th=[ 472], 10.00th=[ 477], 20.00th=[ 498], 00:20:35.946 | 30.00th=[ 535], 40.00th=[ 584], 50.00th=[ 659], 60.00th=[ 2735], 00:20:35.946 | 70.00th=[ 6812], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9060], 00:20:35.946 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:20:35.946 | 99.99th=[ 9060] 00:20:35.946 bw ( KiB/s): min= 6144, max=192512, per=3.25%, avg=66413.71, stdev=85738.07, samples=7 00:20:35.946 iops : min= 6, max= 188, avg=64.86, stdev=83.73, samples=7 00:20:35.946 lat (msec) : 250=1.41%, 500=18.93%, 750=36.44%, >=2000=43.22% 00:20:35.946 cpu : usr=0.02%, sys=1.02%, ctx=246, majf=0, minf=32769 00:20:35.946 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:20:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.946 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:35.946 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.946 job5: (groupid=0, jobs=1): err= 0: pid=257549: Fri Apr 26 14:57:35 2024 00:20:35.946 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(434MiB/10109msec) 00:20:35.946 slat (usec): min=43, max=2040.7k, avg=23050.41, stdev=175374.08 00:20:35.946 clat (msec): min=101, max=7117, avg=2434.96, stdev=2663.56 00:20:35.946 lat (msec): min=110, max=7128, avg=2458.01, stdev=2667.84 00:20:35.946 clat percentiles (msec): 00:20:35.946 | 1.00th=[ 144], 5.00th=[ 493], 10.00th=[ 502], 20.00th=[ 527], 00:20:35.946 | 30.00th=[ 567], 40.00th=[ 625], 50.00th=[ 684], 60.00th=[ 802], 00:20:35.946 | 70.00th=[ 3037], 80.00th=[ 6678], 90.00th=[ 6879], 95.00th=[ 7013], 00:20:35.946 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 7148], 99.95th=[ 7148], 00:20:35.946 | 99.99th=[ 7148] 00:20:35.946 bw ( KiB/s): min= 8192, max=200704, per=4.39%, avg=89819.43, stdev=89140.39, 
samples=7 00:20:35.946 iops : min= 8, max= 196, avg=87.71, stdev=87.05, samples=7 00:20:35.946 lat (msec) : 250=2.07%, 500=7.14%, 750=42.17%, 1000=12.90%, 2000=0.69% 00:20:35.946 lat (msec) : >=2000=35.02% 00:20:35.946 cpu : usr=0.01%, sys=1.34%, ctx=271, majf=0, minf=32769 00:20:35.946 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:20:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.946 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:35.946 issued rwts: total=434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:35.946 job5: (groupid=0, jobs=1): err= 0: pid=257550: Fri Apr 26 14:57:35 2024 00:20:35.946 read: IOPS=50, BW=50.1MiB/s (52.5MB/s)(510MiB/10189msec) 00:20:35.946 slat (usec): min=41, max=2035.5k, avg=19663.75, stdev=130540.94 00:20:35.946 clat (msec): min=156, max=6857, avg=1572.96, stdev=1605.23 00:20:35.946 lat (msec): min=227, max=6871, avg=1592.62, stdev=1620.79 00:20:35.946 clat percentiles (msec): 00:20:35.946 | 1.00th=[ 257], 5.00th=[ 376], 10.00th=[ 550], 20.00th=[ 802], 00:20:35.946 | 30.00th=[ 852], 40.00th=[ 995], 50.00th=[ 1099], 60.00th=[ 1217], 00:20:35.946 | 70.00th=[ 1401], 80.00th=[ 1536], 90.00th=[ 3540], 95.00th=[ 6678], 00:20:35.946 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:20:35.946 | 99.99th=[ 6879] 00:20:35.946 bw ( KiB/s): min=47104, max=167936, per=5.46%, avg=111762.29, stdev=44065.44, samples=7 00:20:35.946 iops : min= 46, max= 164, avg=109.14, stdev=43.03, samples=7 00:20:35.946 lat (msec) : 250=0.78%, 500=8.63%, 750=6.47%, 1000=24.71%, 2000=48.04% 00:20:35.946 lat (msec) : >=2000=11.37% 00:20:35.946 cpu : usr=0.02%, sys=1.44%, ctx=521, majf=0, minf=32769 00:20:35.946 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:20:35.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.946 
complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:20:35.946 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:35.946 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:35.946
00:20:35.946 Run status group 0 (all jobs):
00:20:35.946 READ: bw=1999MiB/s (2096MB/s), 761KiB/s-224MiB/s (779kB/s-235MB/s), io=27.7GiB (29.8GB), run=10075-14207msec
00:20:35.946
00:20:35.946 Disk stats (read/write):
00:20:35.946 nvme0n1: ios=12189/0, merge=0/0, ticks=10729850/0, in_queue=10729850, util=98.90%
00:20:35.946 nvme1n1: ios=35323/0, merge=0/0, ticks=6430740/0, in_queue=6430740, util=98.73%
00:20:35.946 nvme2n1: ios=14878/0, merge=0/0, ticks=7748718/0, in_queue=7748718, util=98.95%
00:20:35.946 nvme3n1: ios=25170/0, merge=0/0, ticks=8388167/0, in_queue=8388167, util=99.06%
00:20:35.946 nvme4n1: ios=74139/0, merge=0/0, ticks=9727387/0, in_queue=9727387, util=99.21%
00:20:35.946 nvme5n1: ios=63721/0, merge=0/0, ticks=10239771/0, in_queue=10239771, util=99.28%
00:20:35.946 14:57:35 -- target/srq_overwhelm.sh@38 -- # sync
00:20:35.946 14:57:35 -- target/srq_overwhelm.sh@40 -- # seq 0 5
00:20:35.946 14:57:35 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:35.946 14:57:35 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:20:37.844 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:20:37.844 14:57:37 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:20:37.844 14:57:37 -- common/autotest_common.sh@1205 -- # local i=0
00:20:37.844 14:57:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:37.844 14:57:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000
00:20:37.844 14:57:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:37.844 14:57:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000
00:20:37.844 14:57:37 -- common/autotest_common.sh@1217 -- # return 0
00:20:37.844 14:57:37 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:20:37.844 14:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:37.844 14:57:37 -- common/autotest_common.sh@10 -- # set +x
00:20:37.844 14:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:37.844 14:57:37 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:37.844 14:57:37 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:20:40.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:20:40.388 14:57:40 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:20:40.388 14:57:40 -- common/autotest_common.sh@1205 -- # local i=0
00:20:40.388 14:57:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:40.388 14:57:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001
00:20:40.388 14:57:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:40.388 14:57:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001
00:20:40.388 14:57:40 -- common/autotest_common.sh@1217 -- # return 0
00:20:40.388 14:57:40 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:40.388 14:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:40.388 14:57:40 -- common/autotest_common.sh@10 -- # set +x
00:20:40.388 14:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:40.388 14:57:40 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:40.388 14:57:40 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:20:42.287 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:20:42.287 14:57:42 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:20:42.287 14:57:42 -- common/autotest_common.sh@1205 -- # local i=0
00:20:42.287 14:57:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:42.287 14:57:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002
00:20:42.287 14:57:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:42.287 14:57:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002
00:20:42.287 14:57:42 -- common/autotest_common.sh@1217 -- # return 0
00:20:42.287 14:57:42 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:20:42.287 14:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:42.287 14:57:42 -- common/autotest_common.sh@10 -- # set +x
00:20:42.287 14:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:42.287 14:57:42 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:42.287 14:57:42 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3
00:20:44.812 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s)
00:20:44.812 14:57:44 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003
00:20:44.812 14:57:44 -- common/autotest_common.sh@1205 -- # local i=0
00:20:44.812 14:57:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:44.812 14:57:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003
00:20:44.812 14:57:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:44.812 14:57:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003
00:20:44.812 14:57:44 -- common/autotest_common.sh@1217 -- # return 0
00:20:44.812 14:57:44 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:20:44.812 14:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:44.812 14:57:44 -- common/autotest_common.sh@10 -- # set +x
00:20:44.812 14:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:44.812 14:57:44 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:44.812 14:57:44 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4
00:20:47.343 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s)
00:20:47.343 14:57:46 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004
00:20:47.343 14:57:46 -- common/autotest_common.sh@1205 -- # local i=0
00:20:47.343 14:57:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:47.343 14:57:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004
00:20:47.343 14:57:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:47.343 14:57:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004
00:20:47.343 14:57:46 -- common/autotest_common.sh@1217 -- # return 0
00:20:47.343 14:57:46 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:20:47.343 14:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:47.343 14:57:46 -- common/autotest_common.sh@10 -- # set +x
00:20:47.343 14:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:47.343 14:57:46 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:20:47.343 14:57:46 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5
00:20:49.235 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s)
00:20:49.235 14:57:49 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005
00:20:49.235 14:57:49 -- common/autotest_common.sh@1205 -- # local i=0
00:20:49.235 14:57:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:20:49.235 14:57:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005
00:20:49.235 14:57:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:20:49.235 14:57:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005
00:20:49.235 14:57:49 -- common/autotest_common.sh@1217 -- # return 0
00:20:49.235 14:57:49 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5
00:20:49.235 14:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.235 14:57:49 -- common/autotest_common.sh@10 -- # set +x
00:20:49.235 14:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.235 14:57:49 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:20:49.235 14:57:49 -- target/srq_overwhelm.sh@48 -- # nvmftestfini
00:20:49.235 14:57:49 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:49.235 14:57:49 -- nvmf/common.sh@117 -- # sync
00:20:49.235 14:57:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:20:49.235 14:57:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:20:49.235 14:57:49 -- nvmf/common.sh@120 -- # set +e
00:20:49.235 14:57:49 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:49.235 14:57:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:20:49.235 rmmod nvme_rdma
00:20:49.235 rmmod nvme_fabrics
00:20:49.235 14:57:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:49.235 14:57:49 -- nvmf/common.sh@124 -- # set -e
00:20:49.235 14:57:49 -- nvmf/common.sh@125 -- # return 0
00:20:49.235 14:57:49 -- nvmf/common.sh@478 -- # '[' -n 253840 ']'
00:20:49.235 14:57:49 -- nvmf/common.sh@479 -- # killprocess 253840
00:20:49.235 14:57:49 -- common/autotest_common.sh@936 -- # '[' -z 253840 ']'
00:20:49.235 14:57:49 -- common/autotest_common.sh@940 -- # kill -0 253840
00:20:49.235 14:57:49 -- common/autotest_common.sh@941 -- # uname
00:20:49.235 14:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:49.235 14:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 253840
00:20:49.235 14:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:49.235 14:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:49.235 14:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 253840'
00:20:49.235 killing process with pid 253840
00:20:49.235 14:57:49 -- common/autotest_common.sh@955 -- # kill 253840
00:20:49.235 14:57:49 -- common/autotest_common.sh@960 -- # wait 253840
00:20:49.492 [2024-04-26 14:57:49.362550] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:20:52.020 14:57:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:52.020 14:57:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:20:52.020
00:20:52.020 real 0m55.985s
00:20:52.020 user 3m30.969s
00:20:52.020 sys 0m11.768s
00:20:52.020 14:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:52.020 14:57:51 -- common/autotest_common.sh@10 -- # set +x
00:20:52.020 ************************************
00:20:52.020 END TEST nvmf_srq_overwhelm
00:20:52.020 ************************************
00:20:52.020 14:57:51 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:20:52.020 14:57:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:52.020 14:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:52.020 14:57:51 -- common/autotest_common.sh@10 -- # set +x
00:20:52.020 ************************************
00:20:52.020 START TEST nvmf_shutdown
00:20:52.020 ************************************
00:20:52.020 14:57:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma
00:20:52.020 * Looking for test storage...
00:20:52.020 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:20:52.020 14:57:51 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:52.020 14:57:51 -- nvmf/common.sh@7 -- # uname -s
00:20:52.020 14:57:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:52.020 14:57:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:52.020 14:57:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:52.020 14:57:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:52.020 14:57:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:52.020 14:57:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:52.020 14:57:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:52.020 14:57:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:52.020 14:57:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:52.020 14:57:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:52.020 14:57:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd
00:20:52.020 14:57:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd
00:20:52.020 14:57:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:52.020 14:57:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:52.020 14:57:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:52.020 14:57:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:52.020 14:57:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:52.020 14:57:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:52.020 14:57:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:52.020 14:57:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:52.020 14:57:51 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.020 14:57:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.020 14:57:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.021 14:57:51 -- paths/export.sh@5 -- # export PATH 00:20:52.021 14:57:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.021 14:57:51 -- nvmf/common.sh@47 -- # : 0 00:20:52.021 14:57:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.021 14:57:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.021 14:57:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.021 14:57:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.021 14:57:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.021 14:57:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.021 14:57:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.021 14:57:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.021 14:57:51 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:52.021 14:57:51 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:52.021 14:57:51 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:52.021 14:57:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:52.021 14:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.021 14:57:51 -- common/autotest_common.sh@10 -- # set +x 00:20:52.021 ************************************ 00:20:52.021 START TEST nvmf_shutdown_tc1 00:20:52.021 ************************************ 00:20:52.021 14:57:51 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:52.021 14:57:51 -- target/shutdown.sh@74 -- # starttarget 00:20:52.021 14:57:51 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:52.021 14:57:51 -- 
nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:52.021 14:57:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.021 14:57:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:52.021 14:57:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:52.021 14:57:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:52.021 14:57:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.021 14:57:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.021 14:57:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.021 14:57:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:52.021 14:57:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:52.021 14:57:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.021 14:57:51 -- common/autotest_common.sh@10 -- # set +x 00:20:53.926 14:57:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:53.926 14:57:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.926 14:57:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.926 14:57:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.926 14:57:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.926 14:57:53 -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.926 14:57:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@296 -- # e810=() 00:20:53.926 14:57:53 -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.926 14:57:53 -- nvmf/common.sh@297 -- # x722=() 00:20:53.926 14:57:53 -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.926 14:57:53 -- nvmf/common.sh@298 -- # mlx=() 00:20:53.926 14:57:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.926 14:57:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.926 14:57:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:20:53.926 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:20:53.926 14:57:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:53.926 14:57:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.926 14:57:53 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:20:53.926 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:20:53.926 14:57:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:53.926 14:57:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.926 14:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.926 14:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:20:53.926 Found net devices under 0000:09:00.0: mlx_0_0 00:20:53.926 14:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.926 14:57:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.926 14:57:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:20:53.926 Found net devices under 0000:09:00.1: mlx_0_1 00:20:53.926 14:57:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.926 14:57:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:53.926 14:57:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@408 
-- # [[ rdma == rdma ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:53.926 14:57:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:53.926 14:57:53 -- nvmf/common.sh@58 -- # uname 00:20:53.926 14:57:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:53.926 14:57:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:53.926 14:57:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:53.926 14:57:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:53.926 14:57:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:53.926 14:57:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:53.926 14:57:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:53.926 14:57:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:53.926 14:57:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:53.926 14:57:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:53.926 14:57:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:53.926 14:57:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:53.926 14:57:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:53.926 14:57:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:53.926 14:57:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:53.926 14:57:53 -- nvmf/common.sh@105 -- # continue 2 00:20:53.926 14:57:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:20:53.926 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.926 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:53.926 14:57:53 -- nvmf/common.sh@105 -- # continue 2 00:20:53.926 14:57:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:53.926 14:57:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:53.926 14:57:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:53.926 14:57:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:53.926 14:57:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:53.926 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:53.926 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:20:53.926 altname enp9s0f0np0 00:20:53.926 inet 192.168.100.8/24 scope global mlx_0_0 00:20:53.926 valid_lft forever preferred_lft forever 00:20:53.926 14:57:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:53.926 14:57:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:53.926 14:57:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:53.926 14:57:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:53.926 14:57:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:53.926 14:57:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:53.926 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:53.926 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:20:53.926 
altname enp9s0f1np1 00:20:53.926 inet 192.168.100.9/24 scope global mlx_0_1 00:20:53.926 valid_lft forever preferred_lft forever 00:20:53.926 14:57:53 -- nvmf/common.sh@411 -- # return 0 00:20:53.926 14:57:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:53.926 14:57:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:53.926 14:57:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:53.926 14:57:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:53.926 14:57:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:53.926 14:57:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:53.926 14:57:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:53.927 14:57:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:53.927 14:57:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:53.927 14:57:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:53.927 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.927 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:53.927 14:57:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:53.927 14:57:53 -- nvmf/common.sh@105 -- # continue 2 00:20:53.927 14:57:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:53.927 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.927 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:53.927 14:57:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:53.927 14:57:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:53.927 14:57:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:53.927 14:57:53 -- nvmf/common.sh@105 -- # continue 2 00:20:53.927 14:57:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:53.927 14:57:53 -- nvmf/common.sh@87 -- 
# get_ip_address mlx_0_0 00:20:53.927 14:57:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:53.927 14:57:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:53.927 14:57:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:53.927 14:57:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:53.927 14:57:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:53.927 14:57:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:53.927 192.168.100.9' 00:20:53.927 14:57:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:53.927 192.168.100.9' 00:20:53.927 14:57:53 -- nvmf/common.sh@446 -- # head -n 1 00:20:53.927 14:57:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:53.927 14:57:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:53.927 192.168.100.9' 00:20:53.927 14:57:53 -- nvmf/common.sh@447 -- # tail -n +2 00:20:53.927 14:57:53 -- nvmf/common.sh@447 -- # head -n 1 00:20:53.927 14:57:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:53.927 14:57:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:53.927 14:57:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:53.927 14:57:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:53.927 14:57:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:53.927 14:57:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:53.927 14:57:53 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:53.927 14:57:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:53.927 14:57:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.927 14:57:53 
-- common/autotest_common.sh@10 -- # set +x 00:20:53.927 14:57:53 -- nvmf/common.sh@470 -- # nvmfpid=262760 00:20:53.927 14:57:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:53.927 14:57:53 -- nvmf/common.sh@471 -- # waitforlisten 262760 00:20:53.927 14:57:53 -- common/autotest_common.sh@817 -- # '[' -z 262760 ']' 00:20:53.927 14:57:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.927 14:57:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.927 14:57:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.927 14:57:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.927 14:57:53 -- common/autotest_common.sh@10 -- # set +x 00:20:53.927 [2024-04-26 14:57:53.674836] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:53.927 [2024-04-26 14:57:53.674970] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.927 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.927 [2024-04-26 14:57:53.807836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.186 [2024-04-26 14:57:54.060764] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.186 [2024-04-26 14:57:54.060837] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:54.186 [2024-04-26 14:57:54.060865] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.186 [2024-04-26 14:57:54.060888] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.186 [2024-04-26 14:57:54.060907] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.186 [2024-04-26 14:57:54.061059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.186 [2024-04-26 14:57:54.061188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.186 [2024-04-26 14:57:54.061282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.186 [2024-04-26 14:57:54.061286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:54.751 14:57:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.751 14:57:54 -- common/autotest_common.sh@850 -- # return 0 00:20:54.751 14:57:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:54.751 14:57:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:54.751 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:20:54.751 14:57:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.751 14:57:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:54.751 14:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.751 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:20:54.751 [2024-04-26 14:57:54.624772] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000283c0/0x7fe6fc4bd940) succeed. 00:20:54.751 [2024-04-26 14:57:54.636824] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028540/0x7fe6fc477940) succeed. 
00:20:55.009 14:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.009 14:57:54 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:55.009 14:57:54 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:55.009 14:57:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:55.009 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:20:55.009 14:57:54 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.009 14:57:54 -- target/shutdown.sh@28 -- # cat 00:20:55.009 14:57:54 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:55.009 14:57:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.009 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:20:55.009 Malloc1 00:20:55.009 [2024-04-26 14:57:55.067156] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:55.266 Malloc2 00:20:55.266 Malloc3 00:20:55.266 Malloc4 00:20:55.524 Malloc5 00:20:55.524 Malloc6 00:20:55.782 Malloc7 00:20:55.782 Malloc8 00:20:55.782 Malloc9 00:20:56.040 Malloc10 00:20:56.040 14:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.040 14:57:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:56.040 14:57:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:56.040 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:20:56.040 14:57:55 -- target/shutdown.sh@78 -- # perfpid=263090 00:20:56.040 14:57:55 -- target/shutdown.sh@79 -- # waitforlisten 263090 /var/tmp/bdevperf.sock 00:20:56.040 14:57:55 -- common/autotest_common.sh@817 -- # '[' -z 263090 ']' 00:20:56.040 14:57:55 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:56.040 14:57:55 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.040 14:57:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.040 14:57:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.040 14:57:55 -- nvmf/common.sh@521 -- # config=() 00:20:56.040 14:57:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:56.040 14:57:55 -- nvmf/common.sh@521 -- # local subsystem config 00:20:56.040 14:57:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.040 14:57:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.040 14:57:55 -- common/autotest_common.sh@10 -- # set +x 00:20:56.040 14:57:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.040 { 00:20:56.040 "params": { 00:20:56.040 "name": "Nvme$subsystem", 00:20:56.040 "trtype": "$TEST_TRANSPORT", 00:20:56.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.040 "adrfam": "ipv4", 00:20:56.040 "trsvcid": "$NVMF_PORT", 00:20:56.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.040 "hdgst": ${hdgst:-false}, 00:20:56.040 "ddgst": ${ddgst:-false} 00:20:56.040 }, 00:20:56.040 "method": "bdev_nvme_attach_controller" 00:20:56.040 } 00:20:56.040 EOF 00:20:56.040 )") 00:20:56.040 14:57:55 -- nvmf/common.sh@543 -- # cat 00:20:56.040 14:57:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.040 14:57:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.040 { 00:20:56.040 "params": { 00:20:56.040 "name": "Nvme$subsystem", 00:20:56.040 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 
00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:55 -- nvmf/common.sh@543 -- 
# cat 00:20:56.041 14:57:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:56.041 { 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme$subsystem", 00:20:56.041 "trtype": "$TEST_TRANSPORT", 00:20:56.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "$NVMF_PORT", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.041 "hdgst": ${hdgst:-false}, 00:20:56.041 "ddgst": ${ddgst:-false} 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 } 00:20:56.041 EOF 00:20:56.041 )") 00:20:56.041 14:57:56 -- nvmf/common.sh@543 -- # cat 00:20:56.041 14:57:56 -- nvmf/common.sh@545 -- # jq . 
00:20:56.041 14:57:56 -- nvmf/common.sh@546 -- # IFS=, 00:20:56.041 14:57:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme1", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 },{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme2", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 },{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme3", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 },{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme4", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 },{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme5", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 
00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.041 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:56.041 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:56.041 "hdgst": false, 00:20:56.041 "ddgst": false 00:20:56.041 }, 00:20:56.041 "method": "bdev_nvme_attach_controller" 00:20:56.041 },{ 00:20:56.041 "params": { 00:20:56.041 "name": "Nvme6", 00:20:56.041 "trtype": "rdma", 00:20:56.041 "traddr": "192.168.100.8", 00:20:56.041 "adrfam": "ipv4", 00:20:56.041 "trsvcid": "4420", 00:20:56.042 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:56.042 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:56.042 "hdgst": false, 00:20:56.042 "ddgst": false 00:20:56.042 }, 00:20:56.042 "method": "bdev_nvme_attach_controller" 00:20:56.042 },{ 00:20:56.042 "params": { 00:20:56.042 "name": "Nvme7", 00:20:56.042 "trtype": "rdma", 00:20:56.042 "traddr": "192.168.100.8", 00:20:56.042 "adrfam": "ipv4", 00:20:56.042 "trsvcid": "4420", 00:20:56.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:56.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:56.042 "hdgst": false, 00:20:56.042 "ddgst": false 00:20:56.042 }, 00:20:56.042 "method": "bdev_nvme_attach_controller" 00:20:56.042 },{ 00:20:56.042 "params": { 00:20:56.042 "name": "Nvme8", 00:20:56.042 "trtype": "rdma", 00:20:56.042 "traddr": "192.168.100.8", 00:20:56.042 "adrfam": "ipv4", 00:20:56.042 "trsvcid": "4420", 00:20:56.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:56.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:56.042 "hdgst": false, 00:20:56.042 "ddgst": false 00:20:56.042 }, 00:20:56.042 "method": "bdev_nvme_attach_controller" 00:20:56.042 },{ 00:20:56.042 "params": { 00:20:56.042 "name": "Nvme9", 00:20:56.042 "trtype": "rdma", 00:20:56.042 "traddr": "192.168.100.8", 00:20:56.042 "adrfam": "ipv4", 00:20:56.042 "trsvcid": "4420", 00:20:56.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:56.042 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:56.042 "hdgst": false, 00:20:56.042 "ddgst": false 00:20:56.042 }, 
00:20:56.042 "method": "bdev_nvme_attach_controller" 00:20:56.042 },{ 00:20:56.042 "params": { 00:20:56.042 "name": "Nvme10", 00:20:56.042 "trtype": "rdma", 00:20:56.042 "traddr": "192.168.100.8", 00:20:56.042 "adrfam": "ipv4", 00:20:56.042 "trsvcid": "4420", 00:20:56.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:56.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:56.042 "hdgst": false, 00:20:56.042 "ddgst": false 00:20:56.042 }, 00:20:56.042 "method": "bdev_nvme_attach_controller" 00:20:56.042 }' 00:20:56.042 [2024-04-26 14:57:56.061205] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:56.042 [2024-04-26 14:57:56.061337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:56.300 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.300 [2024-04-26 14:57:56.194210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.558 [2024-04-26 14:57:56.428813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.491 14:57:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.491 14:57:57 -- common/autotest_common.sh@850 -- # return 0 00:20:57.491 14:57:57 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:57.491 14:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.491 14:57:57 -- common/autotest_common.sh@10 -- # set +x 00:20:57.491 14:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.491 14:57:57 -- target/shutdown.sh@83 -- # kill -9 263090 00:20:57.491 14:57:57 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:57.491 14:57:57 -- target/shutdown.sh@87 -- # sleep 1 00:20:58.862 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 263090 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r 
/var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:58.862 14:57:58 -- target/shutdown.sh@88 -- # kill -0 262760 00:20:58.862 14:57:58 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:58.862 14:57:58 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:58.862 14:57:58 -- nvmf/common.sh@521 -- # config=() 00:20:58.862 14:57:58 -- nvmf/common.sh@521 -- # local subsystem config 00:20:58.862 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.862 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.862 { 00:20:58.862 "params": { 00:20:58.862 "name": "Nvme$subsystem", 00:20:58.862 "trtype": "$TEST_TRANSPORT", 00:20:58.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.862 "adrfam": "ipv4", 00:20:58.862 "trsvcid": "$NVMF_PORT", 00:20:58.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.862 "hdgst": ${hdgst:-false}, 00:20:58.862 "ddgst": ${ddgst:-false} 00:20:58.862 }, 00:20:58.862 "method": "bdev_nvme_attach_controller" 00:20:58.862 } 00:20:58.862 EOF 00:20:58.862 )") 00:20:58.862 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.862 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.862 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.862 { 00:20:58.862 "params": { 00:20:58.862 "name": "Nvme$subsystem", 00:20:58.862 "trtype": "$TEST_TRANSPORT", 00:20:58.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.862 "adrfam": "ipv4", 00:20:58.862 "trsvcid": "$NVMF_PORT", 00:20:58.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.862 "hdgst": ${hdgst:-false}, 00:20:58.862 "ddgst": ${ddgst:-false} 00:20:58.862 }, 00:20:58.862 "method": "bdev_nvme_attach_controller" 00:20:58.862 } 00:20:58.862 EOF 00:20:58.862 
)") 00:20:58.862 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:58.863 { 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme$subsystem", 00:20:58.863 "trtype": "$TEST_TRANSPORT", 00:20:58.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "$NVMF_PORT", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.863 "hdgst": ${hdgst:-false}, 00:20:58.863 "ddgst": ${ddgst:-false} 00:20:58.863 }, 
00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 } 00:20:58.863 EOF 00:20:58.863 )") 00:20:58.863 14:57:58 -- nvmf/common.sh@543 -- # cat 00:20:58.863 14:57:58 -- nvmf/common.sh@545 -- # jq . 00:20:58.863 14:57:58 -- nvmf/common.sh@546 -- # IFS=, 00:20:58.863 14:57:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme1", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme2", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme3", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme4", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 
00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme5", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme6", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme7", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme8", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme9", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 
00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 },{ 00:20:58.863 "params": { 00:20:58.863 "name": "Nvme10", 00:20:58.863 "trtype": "rdma", 00:20:58.863 "traddr": "192.168.100.8", 00:20:58.863 "adrfam": "ipv4", 00:20:58.863 "trsvcid": "4420", 00:20:58.863 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:58.863 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:58.863 "hdgst": false, 00:20:58.863 "ddgst": false 00:20:58.863 }, 00:20:58.863 "method": "bdev_nvme_attach_controller" 00:20:58.863 }' 00:20:58.863 [2024-04-26 14:57:58.618552] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:58.863 [2024-04-26 14:57:58.618692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263497 ] 00:20:58.863 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.863 [2024-04-26 14:57:58.748197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.121 [2024-04-26 14:57:58.982324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.499 Running I/O for 1 seconds... 
00:21:01.428 
00:21:01.428                                                                                                  Latency(us)
00:21:01.428 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:21:01.428 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.428 	 Verification LBA range: start 0x0 length 0x400
00:21:01.428 	 Nvme1n1             :       1.22     262.86      16.43      0.00      0.00  235482.98   39030.33  253211.69
00:21:01.428 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.428 	 Verification LBA range: start 0x0 length 0x400
00:21:01.428 	 Nvme2n1             :       1.22     262.39      16.40      0.00      0.00  230959.18   41166.32  243891.01
00:21:01.428 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.428 	 Verification LBA range: start 0x0 length 0x400
00:21:01.428 	 Nvme3n1             :       1.22     270.12      16.88      0.00      0.00  220411.03    6553.60  234570.33
00:21:01.428 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.428 	 Verification LBA range: start 0x0 length 0x400
00:21:01.428 	 Nvme4n1             :       1.22     281.18      17.57      0.00      0.00  206057.59    9514.86  166218.71
00:21:01.428 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.428 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme5n1             :       1.22     267.85      16.74      0.00      0.00  209246.32   14369.37  150684.25
00:21:01.429 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.429 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme6n1             :       1.23     273.26      17.08      0.00      0.00  203021.33   18155.90  137479.96
00:21:01.429 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.429 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme7n1             :       1.24     310.42      19.40      0.00      0.00  184423.41    7767.23  127382.57
00:21:01.429 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.429 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme8n1             :       1.24     309.70      19.36      0.00      0.00  181642.18    9175.04  142140.30
00:21:01.429 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.429 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme9n1             :       1.24     308.91      19.31      0.00      0.00  179136.41   10825.58  158451.48
00:21:01.429 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:01.429 	 Verification LBA range: start 0x0 length 0x400
00:21:01.429 	 Nvme10n1            :       1.23     259.37      16.21      0.00      0.00  208994.91   16990.81  205054.86
00:21:01.429 ===================================================================================================================
00:21:01.429 	 Total               :                2806.04     175.38      0.00      0.00  204627.96    6553.60  253211.69
00:21:02.801 14:58:02 -- target/shutdown.sh@94 -- # stoptarget
00:21:02.801 14:58:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:02.801 14:58:02 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:02.801 14:58:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:02.801 14:58:02 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:02.801 14:58:02 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:02.801 14:58:02 -- nvmf/common.sh@117 -- # sync
00:21:02.801 14:58:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:21:02.801 14:58:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:21:02.801 14:58:02 -- nvmf/common.sh@120 -- # set +e
00:21:02.801 14:58:02 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:02.801 14:58:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:21:02.801 rmmod nvme_rdma
00:21:02.801 rmmod nvme_fabrics
00:21:02.801 14:58:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:02.801 14:58:02 -- nvmf/common.sh@124 -- # set -e
00:21:02.801 14:58:02 -- nvmf/common.sh@125 -- # return 0
00:21:02.801 14:58:02 -- nvmf/common.sh@478 -- # '[' -n 262760 ']'
00:21:02.801 14:58:02 -- nvmf/common.sh@479 -- # killprocess 262760
00:21:02.801 14:58:02 -- common/autotest_common.sh@936 -- # '[' -z 262760 ']'
00:21:02.801 14:58:02 -- common/autotest_common.sh@940 -- # kill -0 262760 00:21:02.801 14:58:02 -- common/autotest_common.sh@941 -- # uname 00:21:02.801 14:58:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.801 14:58:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262760 00:21:02.801 14:58:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:02.801 14:58:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:02.801 14:58:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262760' 00:21:02.801 killing process with pid 262760 00:21:02.801 14:58:02 -- common/autotest_common.sh@955 -- # kill 262760 00:21:02.801 14:58:02 -- common/autotest_common.sh@960 -- # wait 262760 00:21:03.366 [2024-04-26 14:58:03.157605] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:06.647 14:58:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:06.647 14:58:06 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:06.647 00:21:06.647 real 0m14.261s 00:21:06.647 user 0m50.204s 00:21:06.647 sys 0m2.949s 00:21:06.647 14:58:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:06.647 14:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.648 ************************************ 00:21:06.648 END TEST nvmf_shutdown_tc1 00:21:06.648 ************************************ 00:21:06.648 14:58:06 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:06.648 14:58:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:06.648 14:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:06.648 14:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.648 ************************************ 00:21:06.648 START TEST nvmf_shutdown_tc2 00:21:06.648 ************************************ 00:21:06.648 14:58:06 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:21:06.648 14:58:06 -- 
target/shutdown.sh@99 -- # starttarget 00:21:06.648 14:58:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:06.648 14:58:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:06.648 14:58:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.648 14:58:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:06.648 14:58:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:06.648 14:58:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.648 14:58:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.648 14:58:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.648 14:58:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.648 14:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.648 14:58:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:06.648 14:58:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.648 14:58:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.648 14:58:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.648 14:58:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.648 14:58:06 -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.648 14:58:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@296 -- # e810=() 00:21:06.648 14:58:06 -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.648 14:58:06 -- nvmf/common.sh@297 -- # x722=() 00:21:06.648 14:58:06 -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.648 14:58:06 -- nvmf/common.sh@298 -- # mlx=() 00:21:06.648 14:58:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.648 14:58:06 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.648 14:58:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:06.648 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:06.648 14:58:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:21:06.648 14:58:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:06.648 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:06.648 14:58:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:06.648 14:58:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.648 14:58:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.648 14:58:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:06.648 Found net devices under 0000:09:00.0: mlx_0_0 00:21:06.648 14:58:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.648 14:58:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.648 14:58:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:06.648 Found net devices under 0000:09:00.1: mlx_0_1 00:21:06.648 14:58:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.648 14:58:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:06.648 14:58:06 -- nvmf/common.sh@405 
-- # [[ yes == yes ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:06.648 14:58:06 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:06.648 14:58:06 -- nvmf/common.sh@58 -- # uname 00:21:06.648 14:58:06 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:06.648 14:58:06 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:06.648 14:58:06 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:06.648 14:58:06 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:06.648 14:58:06 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:06.648 14:58:06 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:06.648 14:58:06 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:06.648 14:58:06 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:06.648 14:58:06 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:06.648 14:58:06 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:06.648 14:58:06 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:06.648 14:58:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:06.648 14:58:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:06.648 14:58:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:06.648 14:58:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:06.648 14:58:06 -- nvmf/common.sh@105 -- # continue 2 00:21:06.648 14:58:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.648 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:06.648 14:58:06 -- nvmf/common.sh@105 -- # continue 2 00:21:06.648 14:58:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:06.648 14:58:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:06.648 14:58:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:06.648 14:58:06 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:06.648 14:58:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:06.648 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:06.648 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:21:06.648 altname enp9s0f0np0 00:21:06.648 inet 192.168.100.8/24 scope global mlx_0_0 00:21:06.648 valid_lft forever preferred_lft forever 00:21:06.648 14:58:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:06.648 14:58:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:06.648 14:58:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:06.648 14:58:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:06.648 14:58:06 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:06.648 14:58:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:06.648 15: mlx_0_1: mtu 1500 qdisc 
mq state DOWN group default qlen 1000 00:21:06.648 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:21:06.648 altname enp9s0f1np1 00:21:06.648 inet 192.168.100.9/24 scope global mlx_0_1 00:21:06.648 valid_lft forever preferred_lft forever 00:21:06.648 14:58:06 -- nvmf/common.sh@411 -- # return 0 00:21:06.648 14:58:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:06.648 14:58:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:06.648 14:58:06 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:06.648 14:58:06 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:06.648 14:58:06 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:06.648 14:58:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:06.648 14:58:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:06.649 14:58:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:06.649 14:58:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:06.649 14:58:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:06.649 14:58:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.649 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:06.649 14:58:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:06.649 14:58:06 -- nvmf/common.sh@105 -- # continue 2 00:21:06.649 14:58:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:06.649 14:58:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.649 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:06.649 14:58:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:06.649 14:58:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:06.649 14:58:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:06.649 14:58:06 -- nvmf/common.sh@105 -- # continue 2 00:21:06.649 
14:58:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:06.649 14:58:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:06.649 14:58:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:06.649 14:58:06 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:06.649 14:58:06 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:06.649 14:58:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:06.649 14:58:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:06.649 14:58:06 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:06.649 192.168.100.9' 00:21:06.649 14:58:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:06.649 192.168.100.9' 00:21:06.649 14:58:06 -- nvmf/common.sh@446 -- # head -n 1 00:21:06.649 14:58:06 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:06.649 14:58:06 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:06.649 192.168.100.9' 00:21:06.649 14:58:06 -- nvmf/common.sh@447 -- # tail -n +2 00:21:06.649 14:58:06 -- nvmf/common.sh@447 -- # head -n 1 00:21:06.649 14:58:06 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:06.649 14:58:06 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:06.649 14:58:06 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:06.649 14:58:06 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:06.649 14:58:06 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:06.649 14:58:06 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:06.649 14:58:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:06.649 14:58:06 -- nvmf/common.sh@468 -- # 
timing_enter start_nvmf_tgt 00:21:06.649 14:58:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:06.649 14:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.649 14:58:06 -- nvmf/common.sh@470 -- # nvmfpid=264535 00:21:06.649 14:58:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:06.649 14:58:06 -- nvmf/common.sh@471 -- # waitforlisten 264535 00:21:06.649 14:58:06 -- common/autotest_common.sh@817 -- # '[' -z 264535 ']' 00:21:06.649 14:58:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.649 14:58:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.649 14:58:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.649 14:58:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.649 14:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:06.649 [2024-04-26 14:58:06.361591] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:06.649 [2024-04-26 14:58:06.361722] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.649 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.649 [2024-04-26 14:58:06.495259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.906 [2024-04-26 14:58:06.744218] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.906 [2024-04-26 14:58:06.744280] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
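[editor's note] The get_ip_address helper traced above (common.sh@112-113) recovers an interface's IPv4 address from one line of `ip -o -4 addr show` output. A minimal sketch of that pipeline, run here against a captured sample line rather than a live interface:

```shell
# Sample one-line output of `ip -o -4 addr show mlx_0_0` (captured, not live).
line='14: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'

# Field 4 is "addr/prefix"; cut strips the prefix length, as in common.sh@113.
ip_addr=$(printf '%s\n' "$line" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8
```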
00:21:06.906 [2024-04-26 14:58:06.744312] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.906 [2024-04-26 14:58:06.744331] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.906 [2024-04-26 14:58:06.744346] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.906 [2024-04-26 14:58:06.744489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.906 [2024-04-26 14:58:06.744602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.906 [2024-04-26 14:58:06.744644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.906 [2024-04-26 14:58:06.744651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:07.472 14:58:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.472 14:58:07 -- common/autotest_common.sh@850 -- # return 0 00:21:07.472 14:58:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.472 14:58:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.472 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.472 14:58:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.472 14:58:07 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:07.472 14:58:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.472 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.472 [2024-04-26 14:58:07.354811] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000283c0/0x7f30d21bd940) succeed. 00:21:07.472 [2024-04-26 14:58:07.366159] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028540/0x7f30d2179940) succeed. 
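[editor's note] The two-address RDMA_IP_LIST built at common.sh@445-447 above is split into first and second target IPs with head/tail. The same selection, sketched on the two addresses from this trace:

```shell
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First line becomes the first target, everything after it the second
# (common.sh@446-447).
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
```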
00:21:07.730 14:58:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:07.730 14:58:07 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:07.730 14:58:07 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:07.730 14:58:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.730 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.730 14:58:07 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:07.730 14:58:07 -- target/shutdown.sh@28 -- # cat 00:21:07.730 14:58:07 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:07.730 14:58:07 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:07.730 14:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:07.730 Malloc1 00:21:07.730 [2024-04-26 14:58:07.787864] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:07.987 Malloc2 00:21:07.987 Malloc3 00:21:07.987 Malloc4 00:21:08.246 Malloc5 00:21:08.246 Malloc6 00:21:08.504 Malloc7 00:21:08.504 Malloc8 00:21:08.504 Malloc9 00:21:08.763 Malloc10 00:21:08.763 14:58:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.763 14:58:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:08.763 14:58:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:08.763 14:58:08 -- common/autotest_common.sh@10 -- # set +x 00:21:08.763 14:58:08 -- target/shutdown.sh@103 -- # perfpid=264851 00:21:08.763 14:58:08 -- target/shutdown.sh@104 -- # waitforlisten 264851 /var/tmp/bdevperf.sock 00:21:08.763 14:58:08 -- common/autotest_common.sh@817 -- # '[' -z 264851 ']' 00:21:08.763 14:58:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.763 14:58:08 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:08.763 14:58:08 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:08.763 14:58:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.763 14:58:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.763 14:58:08 -- nvmf/common.sh@521 -- # config=() 00:21:08.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
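[editor's note] gen_nvmf_target_json, invoked just below, appends one JSON stanza per subsystem to a `config` array via a captured heredoc (common.sh@543). A simplified sketch of that pattern — the field list is trimmed to two keys here for brevity:

```shell
config=()
for subsystem in 1 2; do
  # Each stanza is captured whole with $(cat <<EOF ... EOF) and pushed
  # onto the array, as the trace shows for all ten subsystems.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  }
}
EOF
)")
done
echo "${#config[@]}"   # 2
```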
00:21:08.763 14:58:08 -- nvmf/common.sh@521 -- # local subsystem config 00:21:08.763 14:58:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.763 14:58:08 -- common/autotest_common.sh@10 -- # set +x 00:21:08.763 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.763 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.763 { 00:21:08.763 "params": { 00:21:08.763 "name": "Nvme$subsystem", 00:21:08.763 "trtype": "$TEST_TRANSPORT", 00:21:08.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.763 "adrfam": "ipv4", 00:21:08.763 "trsvcid": "$NVMF_PORT", 00:21:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.763 "hdgst": ${hdgst:-false}, 00:21:08.763 "ddgst": ${ddgst:-false} 00:21:08.763 }, 00:21:08.763 "method": "bdev_nvme_attach_controller" 00:21:08.763 } 00:21:08.763 EOF 00:21:08.763 )") 00:21:08.763 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.763 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.763 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.763 { 00:21:08.763 "params": { 00:21:08.763 "name": "Nvme$subsystem", 00:21:08.763 "trtype": "$TEST_TRANSPORT", 00:21:08.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.763 "adrfam": "ipv4", 00:21:08.763 "trsvcid": "$NVMF_PORT", 00:21:08.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.763 "hdgst": ${hdgst:-false}, 00:21:08.763 "ddgst": ${ddgst:-false} 00:21:08.763 }, 00:21:08.763 "method": "bdev_nvme_attach_controller" 00:21:08.763 } 00:21:08.763 EOF 00:21:08.763 )") 00:21:08.763 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 
00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- 
# cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:08.764 { 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme$subsystem", 00:21:08.764 "trtype": "$TEST_TRANSPORT", 00:21:08.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "$NVMF_PORT", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.764 "hdgst": ${hdgst:-false}, 00:21:08.764 "ddgst": ${ddgst:-false} 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 } 00:21:08.764 EOF 00:21:08.764 )") 00:21:08.764 14:58:08 -- nvmf/common.sh@543 -- # cat 00:21:08.764 14:58:08 -- nvmf/common.sh@545 -- # jq . 
00:21:08.764 14:58:08 -- nvmf/common.sh@546 -- # IFS=, 00:21:08.764 14:58:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme1", 00:21:08.764 "trtype": "rdma", 00:21:08.764 "traddr": "192.168.100.8", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "4420", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.764 "hdgst": false, 00:21:08.764 "ddgst": false 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 },{ 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme2", 00:21:08.764 "trtype": "rdma", 00:21:08.764 "traddr": "192.168.100.8", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "4420", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:08.764 "hdgst": false, 00:21:08.764 "ddgst": false 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 },{ 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme3", 00:21:08.764 "trtype": "rdma", 00:21:08.764 "traddr": "192.168.100.8", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "4420", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:08.764 "hdgst": false, 00:21:08.764 "ddgst": false 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 },{ 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme4", 00:21:08.764 "trtype": "rdma", 00:21:08.764 "traddr": "192.168.100.8", 00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "4420", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:08.764 "hdgst": false, 00:21:08.764 "ddgst": false 00:21:08.764 }, 00:21:08.764 "method": "bdev_nvme_attach_controller" 00:21:08.764 },{ 00:21:08.764 "params": { 00:21:08.764 "name": "Nvme5", 00:21:08.764 "trtype": "rdma", 00:21:08.764 "traddr": "192.168.100.8", 
00:21:08.764 "adrfam": "ipv4", 00:21:08.764 "trsvcid": "4420", 00:21:08.764 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:08.764 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:08.764 "hdgst": false, 00:21:08.764 "ddgst": false 00:21:08.764 }, 00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 },{ 00:21:08.765 "params": { 00:21:08.765 "name": "Nvme6", 00:21:08.765 "trtype": "rdma", 00:21:08.765 "traddr": "192.168.100.8", 00:21:08.765 "adrfam": "ipv4", 00:21:08.765 "trsvcid": "4420", 00:21:08.765 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:08.765 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:08.765 "hdgst": false, 00:21:08.765 "ddgst": false 00:21:08.765 }, 00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 },{ 00:21:08.765 "params": { 00:21:08.765 "name": "Nvme7", 00:21:08.765 "trtype": "rdma", 00:21:08.765 "traddr": "192.168.100.8", 00:21:08.765 "adrfam": "ipv4", 00:21:08.765 "trsvcid": "4420", 00:21:08.765 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:08.765 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:08.765 "hdgst": false, 00:21:08.765 "ddgst": false 00:21:08.765 }, 00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 },{ 00:21:08.765 "params": { 00:21:08.765 "name": "Nvme8", 00:21:08.765 "trtype": "rdma", 00:21:08.765 "traddr": "192.168.100.8", 00:21:08.765 "adrfam": "ipv4", 00:21:08.765 "trsvcid": "4420", 00:21:08.765 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:08.765 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:08.765 "hdgst": false, 00:21:08.765 "ddgst": false 00:21:08.765 }, 00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 },{ 00:21:08.765 "params": { 00:21:08.765 "name": "Nvme9", 00:21:08.765 "trtype": "rdma", 00:21:08.765 "traddr": "192.168.100.8", 00:21:08.765 "adrfam": "ipv4", 00:21:08.765 "trsvcid": "4420", 00:21:08.765 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:08.765 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:08.765 "hdgst": false, 00:21:08.765 "ddgst": false 00:21:08.765 }, 
00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 },{ 00:21:08.765 "params": { 00:21:08.765 "name": "Nvme10", 00:21:08.765 "trtype": "rdma", 00:21:08.765 "traddr": "192.168.100.8", 00:21:08.765 "adrfam": "ipv4", 00:21:08.765 "trsvcid": "4420", 00:21:08.765 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:08.765 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:08.765 "hdgst": false, 00:21:08.765 "ddgst": false 00:21:08.765 }, 00:21:08.765 "method": "bdev_nvme_attach_controller" 00:21:08.765 }' 00:21:08.765 [2024-04-26 14:58:08.773830] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:08.765 [2024-04-26 14:58:08.773983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264851 ] 00:21:09.023 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.023 [2024-04-26 14:58:08.906349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.282 [2024-04-26 14:58:09.138688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.656 14:58:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.656 14:58:10 -- common/autotest_common.sh@850 -- # return 0 00:21:10.656 14:58:10 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:10.656 14:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.656 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:10.656 Running I/O for 10 seconds... 
00:21:10.656 14:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.656 14:58:10 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:10.656 14:58:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:10.656 14:58:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:10.656 14:58:10 -- target/shutdown.sh@57 -- # local ret=1 00:21:10.656 14:58:10 -- target/shutdown.sh@58 -- # local i 00:21:10.656 14:58:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:10.656 14:58:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:10.656 14:58:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.656 14:58:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.656 14:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.656 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:10.656 14:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.656 14:58:10 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:10.656 14:58:10 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:10.656 14:58:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:10.914 14:58:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:10.914 14:58:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:10.914 14:58:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.914 14:58:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.914 14:58:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.914 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:10.914 14:58:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.914 14:58:10 -- target/shutdown.sh@60 -- # read_io_count=86 00:21:10.914 14:58:10 -- target/shutdown.sh@63 -- # '[' 86 -ge 100 ']' 00:21:10.914 14:58:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:11.172 14:58:11 -- target/shutdown.sh@59 -- # (( i-- )) 
00:21:11.172 14:58:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:11.172 14:58:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:11.172 14:58:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:11.172 14:58:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.172 14:58:11 -- common/autotest_common.sh@10 -- # set +x 00:21:11.431 14:58:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.431 14:58:11 -- target/shutdown.sh@60 -- # read_io_count=214 00:21:11.431 14:58:11 -- target/shutdown.sh@63 -- # '[' 214 -ge 100 ']' 00:21:11.431 14:58:11 -- target/shutdown.sh@64 -- # ret=0 00:21:11.431 14:58:11 -- target/shutdown.sh@65 -- # break 00:21:11.431 14:58:11 -- target/shutdown.sh@69 -- # return 0 00:21:11.431 14:58:11 -- target/shutdown.sh@110 -- # killprocess 264851 00:21:11.431 14:58:11 -- common/autotest_common.sh@936 -- # '[' -z 264851 ']' 00:21:11.431 14:58:11 -- common/autotest_common.sh@940 -- # kill -0 264851 00:21:11.431 14:58:11 -- common/autotest_common.sh@941 -- # uname 00:21:11.431 14:58:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.431 14:58:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 264851 00:21:11.431 14:58:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:11.431 14:58:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:11.431 14:58:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 264851' 00:21:11.431 killing process with pid 264851 00:21:11.431 14:58:11 -- common/autotest_common.sh@955 -- # kill 264851 00:21:11.431 14:58:11 -- common/autotest_common.sh@960 -- # wait 264851 00:21:11.689 Received shutdown signal, test time was about 1.321536 seconds 00:21:11.689 00:21:11.689 Latency(us) 00:21:11.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme1n1 : 1.30 261.25 16.33 0.00 0.00 239732.88 16311.18 256318.58 00:21:11.689 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme2n1 : 1.30 264.74 16.55 0.00 0.00 233224.62 15922.82 245444.46 00:21:11.689 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme3n1 : 1.30 276.60 17.29 0.00 0.00 220097.36 6189.51 231463.44 00:21:11.689 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme4n1 : 1.30 294.52 18.41 0.00 0.00 203742.56 8980.86 167772.16 00:21:11.689 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme5n1 : 1.31 276.40 17.27 0.00 0.00 213510.47 14369.37 212822.09 00:21:11.689 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme6n1 : 1.31 293.35 18.33 0.00 0.00 198770.03 14563.56 152237.70 00:21:11.689 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme7n1 : 1.31 292.83 18.30 0.00 0.00 195087.93 17767.54 142140.30 00:21:11.689 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.689 Verification LBA range: start 0x0 length 0x400 00:21:11.689 Nvme8n1 : 1.31 292.25 18.27 0.00 0.00 192494.55 19515.16 134373.07 00:21:11.690 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.690 Verification LBA range: start 0x0 length 0x400 00:21:11.690 Nvme9n1 : 1.32 291.55 18.22 0.00 0.00 190504.20 21068.61 126605.84 00:21:11.690 Job: Nvme10n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:11.690 Verification LBA range: start 0x0 length 0x400 00:21:11.690 Nvme10n1 : 1.32 242.38 15.15 0.00 0.00 224941.25 17961.72 273406.48 00:21:11.690 =================================================================================================================== 00:21:11.690 Total : 2785.87 174.12 0.00 0.00 210302.51 6189.51 273406.48 00:21:13.065 14:58:12 -- target/shutdown.sh@113 -- # sleep 1 00:21:14.002 14:58:13 -- target/shutdown.sh@114 -- # kill -0 264535 00:21:14.002 14:58:13 -- target/shutdown.sh@116 -- # stoptarget 00:21:14.002 14:58:13 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:14.003 14:58:13 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:14.003 14:58:13 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.003 14:58:13 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:14.003 14:58:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:14.003 14:58:13 -- nvmf/common.sh@117 -- # sync 00:21:14.003 14:58:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:14.003 14:58:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:14.003 14:58:13 -- nvmf/common.sh@120 -- # set +e 00:21:14.003 14:58:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.003 14:58:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:14.003 rmmod nvme_rdma 00:21:14.003 rmmod nvme_fabrics 00:21:14.003 14:58:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.003 14:58:13 -- nvmf/common.sh@124 -- # set -e 00:21:14.003 14:58:13 -- nvmf/common.sh@125 -- # return 0 00:21:14.003 14:58:13 -- nvmf/common.sh@478 -- # '[' -n 264535 ']' 00:21:14.003 14:58:13 -- nvmf/common.sh@479 -- # killprocess 264535 00:21:14.003 14:58:13 -- common/autotest_common.sh@936 -- # '[' -z 264535 ']' 00:21:14.003 14:58:13 -- common/autotest_common.sh@940 -- # kill -0 264535 
00:21:14.003 14:58:13 -- common/autotest_common.sh@941 -- # uname 00:21:14.003 14:58:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.003 14:58:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 264535 00:21:14.003 14:58:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:14.003 14:58:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:14.003 14:58:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 264535' 00:21:14.003 killing process with pid 264535 00:21:14.003 14:58:13 -- common/autotest_common.sh@955 -- # kill 264535 00:21:14.003 14:58:13 -- common/autotest_common.sh@960 -- # wait 264535 00:21:14.569 [2024-04-26 14:58:14.417731] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:17.858 14:58:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:17.858 14:58:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:17.858 00:21:17.858 real 0m11.162s 00:21:17.858 user 0m43.021s 00:21:17.858 sys 0m1.481s 00:21:17.858 14:58:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:17.858 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.858 ************************************ 00:21:17.858 END TEST nvmf_shutdown_tc2 00:21:17.858 ************************************ 00:21:17.858 14:58:17 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:17.858 14:58:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:17.858 14:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:17.858 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.858 ************************************ 00:21:17.858 START TEST nvmf_shutdown_tc3 00:21:17.858 ************************************ 00:21:17.858 14:58:17 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:21:17.858 14:58:17 -- target/shutdown.sh@121 -- # starttarget 00:21:17.858 14:58:17 -- target/shutdown.sh@15 -- # 
nvmftestinit 00:21:17.858 14:58:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:17.858 14:58:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.858 14:58:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:17.858 14:58:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:17.858 14:58:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:17.858 14:58:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.858 14:58:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.858 14:58:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.858 14:58:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:17.858 14:58:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.858 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.858 14:58:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:17.858 14:58:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.858 14:58:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.858 14:58:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.858 14:58:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.858 14:58:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.858 14:58:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.858 14:58:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.858 14:58:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.858 14:58:17 -- nvmf/common.sh@296 -- # e810=() 00:21:17.858 14:58:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.858 14:58:17 -- nvmf/common.sh@297 -- # x722=() 00:21:17.858 14:58:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.858 14:58:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:17.858 14:58:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.858 14:58:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.858 14:58:17 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.858 14:58:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.858 14:58:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.858 14:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:17.858 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:17.858 14:58:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.858 14:58:17 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:21:17.858 14:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:17.858 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:17.858 14:58:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:17.858 14:58:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.858 14:58:17 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.858 14:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.858 14:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.858 14:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.858 14:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:17.858 Found net devices under 0000:09:00.0: mlx_0_0 00:21:17.858 14:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.858 14:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.858 14:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.858 14:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.858 14:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:17.858 Found net devices under 0000:09:00.1: mlx_0_1 00:21:17.858 14:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.858 14:58:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:17.858 14:58:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:17.858 14:58:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 
00:21:17.858 14:58:17 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:17.858 14:58:17 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:17.858 14:58:17 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:17.858 14:58:17 -- nvmf/common.sh@58 -- # uname 00:21:17.858 14:58:17 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:17.858 14:58:17 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:17.858 14:58:17 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:17.859 14:58:17 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:17.859 14:58:17 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:17.859 14:58:17 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:17.859 14:58:17 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:17.859 14:58:17 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:17.859 14:58:17 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:17.859 14:58:17 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:17.859 14:58:17 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:17.859 14:58:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.859 14:58:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:17.859 14:58:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:17.859 14:58:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.859 14:58:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:17.859 14:58:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@105 -- # continue 2 00:21:17.859 14:58:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@105 -- # continue 2 00:21:17.859 14:58:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:17.859 14:58:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:17.859 14:58:17 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:17.859 14:58:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:17.859 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.859 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:21:17.859 altname enp9s0f0np0 00:21:17.859 inet 192.168.100.8/24 scope global mlx_0_0 00:21:17.859 valid_lft forever preferred_lft forever 00:21:17.859 14:58:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:17.859 14:58:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:17.859 14:58:17 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:17.859 14:58:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:17.859 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:17.859 link/ether b8:59:9f:af:fe:01 brd 
ff:ff:ff:ff:ff:ff 00:21:17.859 altname enp9s0f1np1 00:21:17.859 inet 192.168.100.9/24 scope global mlx_0_1 00:21:17.859 valid_lft forever preferred_lft forever 00:21:17.859 14:58:17 -- nvmf/common.sh@411 -- # return 0 00:21:17.859 14:58:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:17.859 14:58:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:17.859 14:58:17 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:17.859 14:58:17 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:17.859 14:58:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:17.859 14:58:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:17.859 14:58:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:17.859 14:58:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:17.859 14:58:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:17.859 14:58:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@105 -- # continue 2 00:21:17.859 14:58:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:17.859 14:58:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:17.859 14:58:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@105 -- # continue 2 00:21:17.859 14:58:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:17.859 
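The `allocate_nic_ips` / `get_ip_address` steps traced above extract each RDMA interface's IPv4 address by taking field 4 of the one-line `ip -o -4 addr show` output and dropping the prefix length. A small sketch of that extraction, copied from the nvmf/common.sh pipeline seen in the log:

```shell
# Sketch of nvmf/common.sh's get_ip_address: on `ip -o -4` output such as
#   14: mlx_0_0    inet 192.168.100.8/24 brd ... scope global mlx_0_0
# field 4 is "192.168.100.8/24"; cut strips the /24 prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
```

The `-o` flag is what makes this robust: each address is printed on a single line, so the awk field positions are fixed regardless of how many addresses or flags the interface carries.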
14:58:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:17.859 14:58:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:17.859 14:58:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:17.859 14:58:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:17.859 14:58:17 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:17.859 192.168.100.9' 00:21:17.859 14:58:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:17.859 192.168.100.9' 00:21:17.859 14:58:17 -- nvmf/common.sh@446 -- # head -n 1 00:21:17.859 14:58:17 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:17.859 14:58:17 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:17.859 192.168.100.9' 00:21:17.859 14:58:17 -- nvmf/common.sh@447 -- # tail -n +2 00:21:17.859 14:58:17 -- nvmf/common.sh@447 -- # head -n 1 00:21:17.859 14:58:17 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:17.859 14:58:17 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:17.859 14:58:17 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:17.859 14:58:17 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:17.859 14:58:17 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:17.859 14:58:17 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:17.859 14:58:17 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:17.859 14:58:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:17.859 14:58:17 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:21:17.859 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.859 14:58:17 -- nvmf/common.sh@470 -- # nvmfpid=266042 00:21:17.859 14:58:17 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:17.859 14:58:17 -- nvmf/common.sh@471 -- # waitforlisten 266042 00:21:17.859 14:58:17 -- common/autotest_common.sh@817 -- # '[' -z 266042 ']' 00:21:17.859 14:58:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.859 14:58:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.859 14:58:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.859 14:58:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.859 14:58:17 -- common/autotest_common.sh@10 -- # set +x 00:21:17.859 [2024-04-26 14:58:17.649075] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:17.859 [2024-04-26 14:58:17.649232] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.859 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.859 [2024-04-26 14:58:17.770507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.118 [2024-04-26 14:58:18.017315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.118 [2024-04-26 14:58:18.017372] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:18.118 [2024-04-26 14:58:18.017406] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.118 [2024-04-26 14:58:18.017441] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.118 [2024-04-26 14:58:18.017457] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.118 [2024-04-26 14:58:18.017615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.118 [2024-04-26 14:58:18.017652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.119 [2024-04-26 14:58:18.017680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.119 [2024-04-26 14:58:18.017694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:18.684 14:58:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.684 14:58:18 -- common/autotest_common.sh@850 -- # return 0 00:21:18.684 14:58:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:18.684 14:58:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.684 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:18.684 14:58:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.684 14:58:18 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:18.684 14:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.685 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:18.685 [2024-04-26 14:58:18.609272] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000283c0/0x7f1ce0a07940) succeed. 00:21:18.685 [2024-04-26 14:58:18.619998] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028540/0x7f1ce09c0940) succeed. 
00:21:18.943 14:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.943 14:58:18 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:18.943 14:58:18 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:18.943 14:58:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:18.943 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:18.943 14:58:18 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:18.943 14:58:18 -- target/shutdown.sh@28 -- # cat 00:21:18.943 14:58:18 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:18.943 14:58:18 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.943 14:58:18 -- common/autotest_common.sh@10 -- # set +x 00:21:18.943 Malloc1 00:21:19.201 [2024-04-26 14:58:19.036953] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:19.201 Malloc2 00:21:19.201 Malloc3 00:21:19.461 Malloc4 00:21:19.461 Malloc5 00:21:19.461 Malloc6 00:21:19.721 Malloc7 00:21:19.721 Malloc8 00:21:19.981 Malloc9 00:21:19.981 Malloc10 00:21:19.981 14:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.981 14:58:19 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:19.981 14:58:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.981 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:21:19.981 14:58:19 -- target/shutdown.sh@125 -- # perfpid=266356 00:21:19.981 14:58:19 -- target/shutdown.sh@126 -- # waitforlisten 266356 /var/tmp/bdevperf.sock 00:21:19.981 14:58:19 -- common/autotest_common.sh@817 -- # '[' -z 266356 ']' 00:21:19.981 14:58:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.981 14:58:19 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:19.981 14:58:19 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:19.981 14:58:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.981 14:58:19 -- nvmf/common.sh@521 -- # config=() 00:21:19.981 14:58:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:19.981 14:58:19 -- nvmf/common.sh@521 -- # local subsystem config 00:21:19.981 14:58:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.981 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.981 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:21:19.981 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.981 { 00:21:19.981 "params": { 00:21:19.981 "name": "Nvme$subsystem", 00:21:19.981 "trtype": "$TEST_TRANSPORT", 00:21:19.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.981 "adrfam": "ipv4", 00:21:19.981 "trsvcid": "$NVMF_PORT", 00:21:19.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.981 "hdgst": ${hdgst:-false}, 00:21:19.981 "ddgst": ${ddgst:-false} 00:21:19.981 }, 00:21:19.981 "method": "bdev_nvme_attach_controller" 00:21:19.981 } 00:21:19.981 EOF 00:21:19.981 )") 00:21:19.981 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.981 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.981 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.981 { 00:21:19.981 "params": { 00:21:19.981 "name": "Nvme$subsystem", 00:21:19.981 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 
00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- 
# cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.982 { 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme$subsystem", 00:21:19.982 "trtype": "$TEST_TRANSPORT", 00:21:19.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "$NVMF_PORT", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.982 "hdgst": ${hdgst:-false}, 00:21:19.982 "ddgst": ${ddgst:-false} 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 } 00:21:19.982 EOF 00:21:19.982 )") 00:21:19.982 14:58:19 -- nvmf/common.sh@543 -- # cat 00:21:19.982 14:58:19 -- nvmf/common.sh@545 -- # jq . 
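The `config+=("$(cat <<-EOF ...)")` loop traced above can be condensed into a short sketch. This is a simplified stand-in for the nvmf/common.sh helper, not its real implementation: the transport, address, and port values below are illustrative, and the heredoc indentation is flattened.

```shell
# Sketch of the config-assembly pattern traced from nvmf/common.sh:
# each loop pass captures a heredoc into the `config` array, and the
# array is later comma-joined before being piped through jq.
TEST_TRANSPORT=rdma            # illustrative values, not the
NVMF_FIRST_TARGET_IP=192.168.100.8  # autotest environment's real ones
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, as the traced `IFS=,` + `printf '%s\n'`
# steps do; a subshell keeps the IFS change local.
(IFS=,; printf '%s\n' "${config[*]}")
```

Joining with `IFS=,` is what turns the per-subsystem fragments into the single `{...},{...}` stream visible in the printf output that follows in the log.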
00:21:19.982 14:58:19 -- nvmf/common.sh@546 -- # IFS=, 00:21:19.982 14:58:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme1", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "4420", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.982 "hdgst": false, 00:21:19.982 "ddgst": false 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 },{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme2", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "4420", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:19.982 "hdgst": false, 00:21:19.982 "ddgst": false 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 },{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme3", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "4420", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:19.982 "hdgst": false, 00:21:19.982 "ddgst": false 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 },{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme4", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "4420", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:19.982 "hdgst": false, 00:21:19.982 "ddgst": false 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 },{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme5", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 
00:21:19.982 "adrfam": "ipv4", 00:21:19.982 "trsvcid": "4420", 00:21:19.982 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:19.982 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:19.982 "hdgst": false, 00:21:19.982 "ddgst": false 00:21:19.982 }, 00:21:19.982 "method": "bdev_nvme_attach_controller" 00:21:19.982 },{ 00:21:19.982 "params": { 00:21:19.982 "name": "Nvme6", 00:21:19.982 "trtype": "rdma", 00:21:19.982 "traddr": "192.168.100.8", 00:21:19.983 "adrfam": "ipv4", 00:21:19.983 "trsvcid": "4420", 00:21:19.983 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:19.983 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:19.983 "hdgst": false, 00:21:19.983 "ddgst": false 00:21:19.983 }, 00:21:19.983 "method": "bdev_nvme_attach_controller" 00:21:19.983 },{ 00:21:19.983 "params": { 00:21:19.983 "name": "Nvme7", 00:21:19.983 "trtype": "rdma", 00:21:19.983 "traddr": "192.168.100.8", 00:21:19.983 "adrfam": "ipv4", 00:21:19.983 "trsvcid": "4420", 00:21:19.983 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:19.983 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:19.983 "hdgst": false, 00:21:19.983 "ddgst": false 00:21:19.983 }, 00:21:19.983 "method": "bdev_nvme_attach_controller" 00:21:19.983 },{ 00:21:19.983 "params": { 00:21:19.983 "name": "Nvme8", 00:21:19.983 "trtype": "rdma", 00:21:19.983 "traddr": "192.168.100.8", 00:21:19.983 "adrfam": "ipv4", 00:21:19.983 "trsvcid": "4420", 00:21:19.983 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:19.983 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:19.983 "hdgst": false, 00:21:19.983 "ddgst": false 00:21:19.983 }, 00:21:19.983 "method": "bdev_nvme_attach_controller" 00:21:19.983 },{ 00:21:19.983 "params": { 00:21:19.983 "name": "Nvme9", 00:21:19.983 "trtype": "rdma", 00:21:19.983 "traddr": "192.168.100.8", 00:21:19.983 "adrfam": "ipv4", 00:21:19.983 "trsvcid": "4420", 00:21:19.983 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:19.983 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:19.983 "hdgst": false, 00:21:19.983 "ddgst": false 00:21:19.983 }, 
00:21:19.983 "method": "bdev_nvme_attach_controller" 00:21:19.983 },{ 00:21:19.983 "params": { 00:21:19.983 "name": "Nvme10", 00:21:19.983 "trtype": "rdma", 00:21:19.983 "traddr": "192.168.100.8", 00:21:19.983 "adrfam": "ipv4", 00:21:19.983 "trsvcid": "4420", 00:21:19.983 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:19.983 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:19.983 "hdgst": false, 00:21:19.983 "ddgst": false 00:21:19.983 }, 00:21:19.983 "method": "bdev_nvme_attach_controller" 00:21:19.983 }' 00:21:19.983 [2024-04-26 14:58:20.035320] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:19.983 [2024-04-26 14:58:20.035504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266356 ] 00:21:20.242 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.242 [2024-04-26 14:58:20.166554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.500 [2024-04-26 14:58:20.400016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.876 14:58:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:21.876 14:58:21 -- common/autotest_common.sh@850 -- # return 0 00:21:21.876 14:58:21 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:21.876 14:58:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.876 14:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:21.876 Running I/O for 10 seconds... 
00:21:21.876 14:58:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.876 14:58:21 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.876 14:58:21 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:21.876 14:58:21 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:21.876 14:58:21 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:21.876 14:58:21 -- target/shutdown.sh@57 -- # local ret=1 00:21:21.876 14:58:21 -- target/shutdown.sh@58 -- # local i 00:21:21.876 14:58:21 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:21.876 14:58:21 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:21.876 14:58:21 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:21.876 14:58:21 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:21.876 14:58:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.876 14:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:21.876 14:58:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.876 14:58:21 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:21.876 14:58:21 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:21.876 14:58:21 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:22.136 14:58:22 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:22.136 14:58:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:22.136 14:58:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:22.136 14:58:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:22.136 14:58:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.136 14:58:22 -- common/autotest_common.sh@10 -- # set +x 00:21:22.395 14:58:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.395 14:58:22 -- target/shutdown.sh@60 -- # read_io_count=91 00:21:22.395 14:58:22 -- 
target/shutdown.sh@63 -- # '[' 91 -ge 100 ']' 00:21:22.395 14:58:22 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:22.654 14:58:22 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:22.654 14:58:22 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:22.654 14:58:22 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:22.654 14:58:22 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:22.654 14:58:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.654 14:58:22 -- common/autotest_common.sh@10 -- # set +x 00:21:22.654 14:58:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.654 14:58:22 -- target/shutdown.sh@60 -- # read_io_count=219 00:21:22.654 14:58:22 -- target/shutdown.sh@63 -- # '[' 219 -ge 100 ']' 00:21:22.654 14:58:22 -- target/shutdown.sh@64 -- # ret=0 00:21:22.654 14:58:22 -- target/shutdown.sh@65 -- # break 00:21:22.654 14:58:22 -- target/shutdown.sh@69 -- # return 0 00:21:22.654 14:58:22 -- target/shutdown.sh@135 -- # killprocess 266042 00:21:22.654 14:58:22 -- common/autotest_common.sh@936 -- # '[' -z 266042 ']' 00:21:22.654 14:58:22 -- common/autotest_common.sh@940 -- # kill -0 266042 00:21:22.654 14:58:22 -- common/autotest_common.sh@941 -- # uname 00:21:22.654 14:58:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.654 14:58:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 266042 00:21:22.913 14:58:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:22.913 14:58:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:22.913 14:58:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 266042' 00:21:22.913 killing process with pid 266042 00:21:22.913 14:58:22 -- common/autotest_common.sh@955 -- # kill 266042 00:21:22.913 14:58:22 -- common/autotest_common.sh@960 -- # wait 266042 00:21:23.483 [2024-04-26 14:58:23.305713] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 
but should be 2048 00:21:24.058 [2024-04-26 14:58:23.890656] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000199567c0 was disconnected and freed. reset controller. 00:21:24.058 [2024-04-26 14:58:23.892762] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019956540 was disconnected and freed. reset controller. 00:21:24.058 [2024-04-26 14:58:23.894619] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000199562c0 was disconnected and freed. reset controller. 00:21:24.058 [2024-04-26 14:58:23.894680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfb40 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.894774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afa80 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.894827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059f9c0 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000058f900 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.894925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057f840 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.894973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056f780 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.894995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f6c0 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f600 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f540 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 
00:21:24.058 [2024-04-26 14:58:23.895191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f480 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f3c0 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000050f300 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff240 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef180 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df0c0 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004cf000 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bef40 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004aee80 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049edc0 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.058 [2024-04-26 14:58:23.895678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048ed00 len:0x10000 key:0x186f00 00:21:24.058 [2024-04-26 14:58:23.895700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047ec40 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046eb80 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045eac0 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044ea00 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043e940 len:0x10000 
key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.895960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042e880 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.895981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041e7c0 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.896029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040e700 len:0x10000 key:0x186f00 00:21:24.059 [2024-04-26 14:58:23.896076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019beffc0 len:0x10000 key:0x186e00 00:21:24.059 [2024-04-26 14:58:23.896122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff00 len:0x10000 key:0x186e00 00:21:24.059 [2024-04-26 14:58:23.896189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcfe40 len:0x10000 key:0x186e00 00:21:24.059 [2024-04-26 14:58:23.896236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef0c0 len:0x10000 key:0x187500 00:21:24.059 [2024-04-26 14:58:23.896284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebaf000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd0000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebf1000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec12000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec33000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec54000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec75000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec96000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecb7000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecd8000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecf9000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1a000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed3b000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.896961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed5c000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.896984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed7d000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed9e000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c167000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c188000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000119d5000 len:0x10000 
key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000119f6000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012404000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e3000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c2000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.059 [2024-04-26 14:58:23.897477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a1000 len:0x10000 key:0x187800 00:21:24.059 [2024-04-26 14:58:23.897499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012380000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001235f000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceae000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8d000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6c000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4b000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2a000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.897861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce09000 len:0x10000 key:0x187800 00:21:24.060 [2024-04-26 14:58:23.897887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e2 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.899691] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019956040 was disconnected and freed. reset controller. 
00:21:24.060 [2024-04-26 14:58:23.899750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f480 len:0x10000 key:0x186e00 00:21:24.060 [2024-04-26 14:58:23.899777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.899823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f3c0 len:0x10000 key:0x186e00 00:21:24.060 [2024-04-26 14:58:23.899849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.899876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f300 len:0x10000 key:0x186e00 00:21:24.060 [2024-04-26 14:58:23.899899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.899924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019deffc0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.899946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.899971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff00 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.899993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcfe40 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfd80 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafcc0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fc00 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fb40 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fa80 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6f9c0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5f900 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4f840 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3f780 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200019d2f6c0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f600 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f540 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff480 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef3c0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf300 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 
14:58:23.900802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf240 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf180 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf0c0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.900955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.900979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f000 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.901001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.901025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8ef40 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.901046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.901071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7ee80 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.901092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.060 [2024-04-26 14:58:23.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6edc0 len:0x10000 key:0x2d600 00:21:24.060 [2024-04-26 14:58:23.901164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5ed00 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4ec40 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3eb80 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2eac0 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1ea00 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0e940 len:0x10000 key:0x2d600 00:21:24.061 [2024-04-26 14:58:23.901471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019feffc0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff00 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 
14:58:23.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcfe40 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfd80 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafcc0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fc00 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fb40 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fa80 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6f9c0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5f900 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.901964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f4f840 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.901989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3f780 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2f6c0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f600 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f540 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff480 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef3c0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x200019edf300 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ecf240 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf180 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf0c0 len:0x10000 key:0x188100 00:21:24.061 [2024-04-26 14:58:23.902557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019abfb40 len:0x10000 key:0x186e00 00:21:24.061 [2024-04-26 14:58:23.902609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1be000 len:0x10000 key:0x187800 00:21:24.061 
[2024-04-26 14:58:23.902672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19d000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17c000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15b000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13a000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f119000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f8000 len:0x10000 key:0x187800 00:21:24.061 [2024-04-26 14:58:23.902955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.061 [2024-04-26 14:58:23.902980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d7000 len:0x10000 key:0x187800 00:21:24.062 [2024-04-26 14:58:23.903002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:10000 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905045] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e6c0 was disconnected and freed. reset controller. 
00:21:24.062 [2024-04-26 14:58:23.905094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f540 len:0x10000 key:0x188500 00:21:24.062 [2024-04-26 14:58:23.905119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f480 len:0x10000 key:0x188500 00:21:24.062 [2024-04-26 14:58:23.905207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f3c0 len:0x10000 key:0x188500 00:21:24.062 [2024-04-26 14:58:23.905260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f300 len:0x10000 key:0x188500 00:21:24.062 [2024-04-26 14:58:23.905309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f000 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8ef40 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7ee80 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6edc0 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5ed00 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4ec40 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3eb80 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2eac0 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1ea00 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0e940 len:0x10000 key:0x188100 00:21:24.062 [2024-04-26 14:58:23.905789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3effc0 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.905849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff00 len:0x10000 key:0x187f00 
00:21:24.062 [2024-04-26 14:58:23.905904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cfe40 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.905949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.905973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfd80 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.905995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afcc0 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fc00 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fb40 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fa80 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36f9c0 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35f900 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34f840 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33f780 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32f6c0 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f600 len:0x10000 key:0x187f00 00:21:24.062 [2024-04-26 14:58:23.906526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.062 [2024-04-26 14:58:23.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f540 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff480 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef3c0 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906682] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df300 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf240 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf180 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af0c0 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f000 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28ef40 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.906969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27ee80 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.906991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26edc0 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25ed00 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24ec40 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23eb80 len:0x10000 
key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22eac0 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21ea00 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20e940 len:0x10000 key:0x187f00 00:21:24.063 [2024-04-26 14:58:23.907357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5effc0 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff00 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cfe40 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfd80 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afcc0 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fc00 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fb40 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fa80 len:0x10000 key:0x240fd 00:21:24.063 [2024-04-26 14:58:23.907756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efd80 len:0x10000 key:0x188500 00:21:24.063 [2024-04-26 14:58:23.907801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012446000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.907847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012425000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.907892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d208000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.907937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.907961] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1e7000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.907982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1c6000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1a5000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130e8000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130c7000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130a6000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.908280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013085000 len:0x10000 key:0x187800 00:21:24.063 [2024-04-26 14:58:23.908302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.063 [2024-04-26 14:58:23.910415] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e440 was disconnected and freed. reset controller. 00:21:24.064 [2024-04-26 14:58:23.910472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74f840 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73f780 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72f6c0 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 
14:58:23.910639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f600 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f540 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff480 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef3c0 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df300 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf240 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf180 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.910974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af0c0 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.910996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f000 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68ef40 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001a67ee80 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66edc0 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65ed00 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64ec40 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63eb80 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62eac0 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 
14:58:23.911431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61ea00 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60e940 len:0x10000 key:0x188400 00:21:24.064 [2024-04-26 14:58:23.911525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9effc0 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff00 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cfe40 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfd80 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afcc0 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fc00 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fb40 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fa80 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 
[2024-04-26 14:58:23.911923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96f9c0 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.911974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95f900 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.911996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94f840 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93f780 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92f6c0 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f600 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f540 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.064 [2024-04-26 14:58:23.912285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff480 len:0x10000 key:0x187b00 00:21:24.064 [2024-04-26 14:58:23.912307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef3c0 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df300 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001a8cf240 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf180 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af0c0 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f000 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88ef40 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87ee80 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 
14:58:23.912716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86edc0 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85ed00 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.912807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a46f780 len:0x10000 key:0x240fd 00:21:24.065 [2024-04-26 14:58:23.912852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b4000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.912897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d5000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.912943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.912978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f6000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b82000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba3000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc4000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be5000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 
[2024-04-26 14:58:23.913253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c06000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c27000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c48000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c69000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8a000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cab000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ccc000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ced000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.913635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d0e000 len:0x10000 key:0x187800 00:21:24.065 [2024-04-26 14:58:23.913657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915657] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e1c0 was disconnected and freed. reset controller. 
00:21:24.065 [2024-04-26 14:58:23.915705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82eac0 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.915730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81ea00 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.915784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80e940 len:0x10000 key:0x187b00 00:21:24.065 [2024-04-26 14:58:23.915831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adeffc0 len:0x10000 key:0x188700 00:21:24.065 [2024-04-26 14:58:23.915877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff00 len:0x10000 key:0x188700 00:21:24.065 [2024-04-26 14:58:23.915930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.915954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcfe40 len:0x10000 key:0x188700 00:21:24.065 [2024-04-26 14:58:23.915988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.916012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfd80 len:0x10000 key:0x188700 00:21:24.065 [2024-04-26 14:58:23.916033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.916058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafcc0 len:0x10000 key:0x188700 00:21:24.065 [2024-04-26 14:58:23.916079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.065 [2024-04-26 14:58:23.916103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fc00 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fb40 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fa80 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6f9c0 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5f900 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4f840 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3f780 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2f6c0 len:0x10000 key:0x188700 
00:21:24.066 [2024-04-26 14:58:23.916512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f600 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f540 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff480 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef3c0 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf300 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf240 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf180 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf0c0 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f000 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.916971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8ef40 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.916993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7ee80 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6edc0 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5ed00 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4ec40 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3eb80 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2eac0 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1ea00 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0e940 len:0x10000 key:0x188700 00:21:24.066 [2024-04-26 14:58:23.917468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afeffc0 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff00 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 
nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcfe40 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfd80 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafcc0 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fc00 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fb40 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fa80 
len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6f9c0 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.917961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5f900 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.917983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.066 [2024-04-26 14:58:23.918016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4f840 len:0x10000 key:0x188a00 00:21:24.066 [2024-04-26 14:58:23.918038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3f780 len:0x10000 key:0x188a00 00:21:24.067 [2024-04-26 14:58:23.918096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2f6c0 len:0x10000 key:0x188a00 00:21:24.067 [2024-04-26 14:58:23.918156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefd80 len:0x10000 key:0x232f1 00:21:24.067 [2024-04-26 14:58:23.918212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f3f000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f60000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f81000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa2000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fc3000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fe4000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012005000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012026000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012047000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918667] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012068000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012089000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120aa000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cb000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ec000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001210d000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.918946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001212e000 len:0x10000 key:0x187800 00:21:24.067 [2024-04-26 14:58:23.918967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.920964] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60df40 was disconnected and freed. reset controller. 00:21:24.067 [2024-04-26 14:58:23.921031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f300 len:0x10000 key:0x188a00 00:21:24.067 [2024-04-26 14:58:23.921061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1effc0 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff00 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 
14:58:23.921211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cfe40 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfd80 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afcc0 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fc00 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fb40 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fa80 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16f9c0 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15f900 len:0x10000 key:0x187a00 00:21:24.067 [2024-04-26 14:58:23.921576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.067 [2024-04-26 14:58:23.921601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14f840 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13f780 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12f6c0 
len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f600 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f540 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff480 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef3c0 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df300 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.921975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf240 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.921996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf180 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af0c0 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f000 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08ef40 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07ee80 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06edc0 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05ed00 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04ec40 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03eb80 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02eac0 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01ea00 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00e940 len:0x10000 key:0x187a00 00:21:24.068 [2024-04-26 14:58:23.922582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3effc0 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff00 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cfe40 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfd80 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afcc0 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fc00 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fb40 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fa80 
len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.922972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.922995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36f9c0 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35f900 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34f840 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33f780 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32f6c0 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f600 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f540 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff480 len:0x10000 key:0x187d00 00:21:24.068 [2024-04-26 14:58:23.923361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.068 [2024-04-26 14:58:23.923386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef3c0 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df300 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf240 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf180 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af0c0 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f000 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28ef40 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 
14:58:23.923724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27ee80 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26edc0 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25ed00 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24ec40 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23eb80 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22eac0 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.923973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.923997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21ea00 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.924023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.924048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20e940 len:0x10000 key:0x187d00 00:21:24.069 [2024-04-26 14:58:23.924071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.924095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f3c0 len:0x10000 key:0x188a00 00:21:24.069 [2024-04-26 14:58:23.924117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.069 [2024-04-26 14:58:23.926059] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60dcc0 was disconnected and freed. reset controller. 
00:21:24.069 [2024-04-26 14:58:23.926119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfcc0 len:0x10000 key:0x188600
00:21:24.069 [2024-04-26 14:58:23.926171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a990 sqhd:0000 p:0 m:0 dnr:0
[... 62 further WRITE commands on sqid:1 (lba 32896 through 40704, len:128), each aborted with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.929264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4efd80 len:0x10000 key:0x188600
00:21:24.071 [2024-04-26 14:58:23.929286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a990 sqhd:0000 p:0 m:0 dnr:0
00:21:24.071 [2024-04-26 14:58:23.934623] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60da40 was disconnected and freed. reset controller.
[... 4 ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:1-4) aborted with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.936787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.071 [2024-04-26 14:58:23.936820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:24.071 [2024-04-26 14:58:23.936858] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.938730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.071 [2024-04-26 14:58:23.938760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:24.071 [2024-04-26 14:58:23.938780] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.940563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.071 [2024-04-26 14:58:23.940592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:24.071 [2024-04-26 14:58:23.940612] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.942231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.071 [2024-04-26 14:58:23.942260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:24.071 [2024-04-26 14:58:23.942279] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.071 [2024-04-26 14:58:23.943879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.072 [2024-04-26 14:58:23.943908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:24.072 [2024-04-26 14:58:23.943932] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.072 [2024-04-26 14:58:23.945620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.072 [2024-04-26 14:58:23.945650] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:24.072 [2024-04-26 14:58:23.945669] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.072 [2024-04-26 14:58:23.947284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.072 [2024-04-26 14:58:23.947313] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:24.072 [2024-04-26 14:58:23.947333] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.072 [2024-04-26 14:58:23.948922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.072 [2024-04-26 14:58:23.948961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:24.072 [2024-04-26 14:58:23.948984] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 4 ASYNC EVENT REQUEST (0c) aborts with SQ DELETION (00/08), elided ...]
00:21:24.072 [2024-04-26 14:58:23.950630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:24.072 [2024-04-26 14:58:23.950660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:24.072 [2024-04-26 14:58:23.950680] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:24.072 [2024-04-26 14:58:23.950727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.072 [2024-04-26 14:58:23.950752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.072 [2024-04-26 14:58:23.950774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.072 [2024-04-26 14:58:23.950794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.072 [2024-04-26 14:58:23.950814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.072 [2024-04-26 14:58:23.950833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.072 [2024-04-26 14:58:23.950853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.072 [2024-04-26 14:58:23.950877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.072 [2024-04-26 14:58:23.987940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:24.072 [2024-04-26 14:58:23.987974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:24.072 [2024-04-26 14:58:23.988022] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:24.072 [2024-04-26 14:58:23.999594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.072 [2024-04-26 14:58:23.999702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:24.072 [2024-04-26 14:58:23.999730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:24.072 [2024-04-26 14:58:23.999877] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:23.999914] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:23.999942] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:23.999970] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:24.000005] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:24.000032] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.072 [2024-04-26 14:58:24.000060] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:24.072 [2024-04-26 14:58:24.000292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:24.072 [2024-04-26 14:58:24.000329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:24.072 [2024-04-26 14:58:24.000365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:24.072 [2024-04-26 14:58:24.000403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:24.072 [2024-04-26 14:58:24.005459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:24.072 task offset: 44032 on job bdev=Nvme1n1 fails 00:21:24.072 00:21:24.072 Latency(us) 00:21:24.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.072 Job: Nvme1n1 ended in about 2.42 seconds with error 00:21:24.072 Verification LBA range: start 0x0 length 0x400 00:21:24.072 Nvme1n1 : 2.42 132.41 8.28 26.48 0.00 400386.53 53982.25 1087412.15 00:21:24.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.072 Job: Nvme2n1 ended in about 2.42 seconds with error 00:21:24.072 Verification LBA range: start 0x0 length 0x400 00:21:24.072 Nvme2n1 : 2.42 134.41 8.40 26.47 0.00 392132.22 5776.88 1093625.93 00:21:24.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.072 Job: Nvme3n1 ended in about 2.42 seconds with error 00:21:24.072 Verification LBA range: start 0x0 length 0x400 00:21:24.072 Nvme3n1 : 2.42 145.51 9.09 26.46 0.00 363827.14 8689.59 1093625.93 00:21:24.072 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme4n1 ended in about 2.42 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 
Nvme4n1 : 2.42 132.22 8.26 26.44 0.00 391259.28 19612.25 1242756.74 00:21:24.073 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme5n1 ended in about 2.42 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme5n1 : 2.42 132.15 8.26 26.43 0.00 388185.00 26408.58 1236542.96 00:21:24.073 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme6n1 ended in about 2.42 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme6n1 : 2.42 132.09 8.26 26.42 0.00 385123.11 30098.01 1217901.61 00:21:24.073 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme7n1 ended in about 2.42 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme7n1 : 2.42 132.03 8.25 26.41 0.00 381988.41 36311.80 1199260.25 00:21:24.073 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme8n1 ended in about 2.42 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme8n1 : 2.42 131.97 8.25 26.39 0.00 378820.33 42331.40 1180618.90 00:21:24.073 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme9n1 ended in about 2.43 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme9n1 : 2.43 131.91 8.24 26.38 0.00 375781.20 73011.96 1161977.55 00:21:24.073 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.073 Job: Nvme10n1 ended in about 2.43 seconds with error 00:21:24.073 Verification LBA range: start 0x0 length 0x400 00:21:24.073 Nvme10n1 : 2.43 105.48 6.59 26.37 0.00 446959.16 73788.68 1143336.20 00:21:24.073 =================================================================================================================== 00:21:24.073 Total : 1310.16 81.89 264.25 
0.00 389276.50 5776.88 1242756.74 00:21:24.073 [2024-04-26 14:58:24.088298] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:24.073 [2024-04-26 14:58:24.088398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:24.073 [2024-04-26 14:58:24.088468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:24.073 [2024-04-26 14:58:24.099011] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.099048] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.099079] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:21:24.073 [2024-04-26 14:58:24.099183] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.099211] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.099229] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20000b1ff280 00:21:24.073 [2024-04-26 14:58:24.099298] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.099327] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.099344] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199e61c0 00:21:24.073 [2024-04-26 14:58:24.102750] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.102784] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.102809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199bad40 00:21:24.073 [2024-04-26 14:58:24.102909] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.102945] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.102963] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199be1c0 00:21:24.073 [2024-04-26 14:58:24.103035] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.103063] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.103079] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199c0040 00:21:24.073 [2024-04-26 14:58:24.103152] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.103181] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.103198] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199a3e40 00:21:24.073 [2024-04-26 14:58:24.103811] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.103842] 
nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.103862] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199a5c40 00:21:24.073 [2024-04-26 14:58:24.103933] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.103960] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.103977] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001998b8c0 00:21:24.073 [2024-04-26 14:58:24.104043] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:24.073 [2024-04-26 14:58:24.104070] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:24.073 [2024-04-26 14:58:24.104087] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000199820c0 00:21:25.451 [2024-04-26 14:58:25.103505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.451 [2024-04-26 14:58:25.103576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.451 [2024-04-26 14:58:25.104923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.451 [2024-04-26 14:58:25.104954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:25.451 [2024-04-26 14:58:25.106215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.451 [2024-04-26 14:58:25.106249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:25.451 [2024-04-26 14:58:25.107476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.451 [2024-04-26 14:58:25.107506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:25.451 [2024-04-26 14:58:25.108809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.451 [2024-04-26 14:58:25.108838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:25.452 [2024-04-26 14:58:25.110038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.452 [2024-04-26 14:58:25.110068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:25.452 [2024-04-26 14:58:25.111296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.452 [2024-04-26 14:58:25.111326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:21:25.452 [2024-04-26 14:58:25.111349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.111371] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.111393] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:25.452 [2024-04-26 14:58:25.111442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.111462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.111481] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:21:25.452 [2024-04-26 14:58:25.111508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.111529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.111547] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:21:25.452 [2024-04-26 14:58:25.112725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.452 [2024-04-26 14:58:25.112755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:25.452 [2024-04-26 14:58:25.113906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.452 [2024-04-26 14:58:25.113935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:21:25.452 [2024-04-26 14:58:25.115178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:25.452 [2024-04-26 14:58:25.115207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:25.452 [2024-04-26 14:58:25.115303] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115345] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115374] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115447] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:21:25.452 [2024-04-26 14:58:25.115474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115512] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:21:25.452 [2024-04-26 14:58:25.115538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115577] 
nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:21:25.452 [2024-04-26 14:58:25.115603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115646] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:21:25.452 [2024-04-26 14:58:25.115775] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115808] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115834] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.115859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.452 [2024-04-26 14:58:25.115882] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115921] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] already in failed state 00:21:25.452 [2024-04-26 14:58:25.115947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.115967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.115985] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] already in failed state 00:21:25.452 [2024-04-26 14:58:25.116011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:25.452 [2024-04-26 14:58:25.116030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:25.452 [2024-04-26 14:58:25.116048] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:21:25.452 [2024-04-26 14:58:25.116383] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.116415] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.452 [2024-04-26 14:58:25.116439] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.389 14:58:26 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:26.389 14:58:26 -- target/shutdown.sh@139 -- # sleep 1 00:21:27.325 14:58:27 -- target/shutdown.sh@142 -- # kill -9 266356 00:21:27.325 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (266356) - No such process 00:21:27.325 14:58:27 -- target/shutdown.sh@142 -- # true 00:21:27.325 14:58:27 -- target/shutdown.sh@144 -- # stoptarget 00:21:27.325 14:58:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:27.325 14:58:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:27.325 14:58:27 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.325 14:58:27 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:27.325 14:58:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:27.325 14:58:27 -- nvmf/common.sh@117 -- # sync 00:21:27.325 14:58:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:27.325 14:58:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:27.325 14:58:27 -- nvmf/common.sh@120 -- # set +e 00:21:27.325 14:58:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.325 14:58:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:27.325 rmmod nvme_rdma 00:21:27.325 rmmod nvme_fabrics 00:21:27.325 14:58:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.325 14:58:27 -- nvmf/common.sh@124 -- # set -e 00:21:27.325 14:58:27 -- nvmf/common.sh@125 -- # return 0 00:21:27.325 14:58:27 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:21:27.325 14:58:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:27.325 14:58:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:27.325 00:21:27.325 real 0m9.852s 00:21:27.325 user 0m36.208s 00:21:27.325 sys 0m1.785s 00:21:27.325 14:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.325 14:58:27 -- common/autotest_common.sh@10 -- # set 
+x 00:21:27.325 ************************************ 00:21:27.325 END TEST nvmf_shutdown_tc3 00:21:27.325 ************************************ 00:21:27.325 14:58:27 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:27.325 00:21:27.325 real 0m35.659s 00:21:27.325 user 2m9.561s 00:21:27.325 sys 0m6.454s 00:21:27.325 14:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.325 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.325 ************************************ 00:21:27.325 END TEST nvmf_shutdown 00:21:27.325 ************************************ 00:21:27.325 14:58:27 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:21:27.325 14:58:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:27.325 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.325 14:58:27 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:21:27.325 14:58:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:27.325 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.325 14:58:27 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:21:27.325 14:58:27 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:27.325 14:58:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:27.325 14:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.325 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.583 ************************************ 00:21:27.583 START TEST nvmf_multicontroller 00:21:27.583 ************************************ 00:21:27.583 14:58:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:27.583 * Looking for test storage... 
00:21:27.583 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:27.583 14:58:27 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.584 14:58:27 -- nvmf/common.sh@7 -- # uname -s 00:21:27.584 14:58:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.584 14:58:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.584 14:58:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.584 14:58:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.584 14:58:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.584 14:58:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.584 14:58:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.584 14:58:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.584 14:58:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.584 14:58:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.584 14:58:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:27.584 14:58:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:27.584 14:58:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.584 14:58:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.584 14:58:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.584 14:58:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.584 14:58:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:27.584 14:58:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.584 14:58:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.584 14:58:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.584 14:58:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.584 14:58:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.584 14:58:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.584 14:58:27 -- paths/export.sh@5 -- # export PATH 00:21:27.584 14:58:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.584 14:58:27 -- nvmf/common.sh@47 -- # : 0 00:21:27.584 14:58:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.584 14:58:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.584 14:58:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.584 14:58:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.584 14:58:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.584 14:58:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.584 14:58:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.584 14:58:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.584 14:58:27 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.584 14:58:27 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:27.584 14:58:27 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:27.584 14:58:27 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:27.584 14:58:27 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.584 14:58:27 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:21:27.584 14:58:27 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:27.584 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:21:27.584 14:58:27 -- host/multicontroller.sh@20 -- # exit 0 00:21:27.584 00:21:27.584 real 0m0.065s 00:21:27.584 user 0m0.028s 00:21:27.584 sys 0m0.043s 00:21:27.584 14:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.584 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.584 ************************************ 00:21:27.584 END TEST nvmf_multicontroller 00:21:27.584 ************************************ 00:21:27.584 14:58:27 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:27.584 14:58:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:27.584 14:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.584 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:27.584 ************************************ 00:21:27.584 START TEST nvmf_aer 00:21:27.584 ************************************ 00:21:27.584 14:58:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:27.843 * Looking for test storage... 
00:21:27.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:27.843 14:58:27 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.843 14:58:27 -- nvmf/common.sh@7 -- # uname -s 00:21:27.843 14:58:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.843 14:58:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.843 14:58:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.843 14:58:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.843 14:58:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.843 14:58:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.843 14:58:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.843 14:58:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.843 14:58:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.843 14:58:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.843 14:58:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:27.843 14:58:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:27.843 14:58:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.843 14:58:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.843 14:58:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.843 14:58:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.843 14:58:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:27.843 14:58:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.843 14:58:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.843 14:58:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.843 14:58:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.843 14:58:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.843 14:58:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.843 14:58:27 -- paths/export.sh@5 -- # export PATH 00:21:27.843 14:58:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.843 14:58:27 -- nvmf/common.sh@47 -- # : 0 00:21:27.843 14:58:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.843 14:58:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.843 14:58:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.843 14:58:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.843 14:58:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.843 14:58:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.843 14:58:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.843 14:58:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.843 14:58:27 -- host/aer.sh@11 -- # nvmftestinit 00:21:27.843 14:58:27 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:27.843 14:58:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.843 14:58:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:27.843 14:58:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:27.843 14:58:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:27.843 14:58:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.843 14:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.843 14:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.844 14:58:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:27.844 14:58:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:27.844 14:58:27 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:27.844 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:29.748 14:58:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.748 14:58:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.748 14:58:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.748 14:58:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.748 14:58:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.748 14:58:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.748 14:58:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.748 14:58:29 -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.748 14:58:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.748 14:58:29 -- nvmf/common.sh@296 -- # e810=() 00:21:29.748 14:58:29 -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.748 14:58:29 -- nvmf/common.sh@297 -- # x722=() 00:21:29.748 14:58:29 -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.748 14:58:29 -- nvmf/common.sh@298 -- # mlx=() 00:21:29.748 14:58:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.748 14:58:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.748 14:58:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.748 14:58:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:29.748 14:58:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:29.748 14:58:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:29.748 14:58:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:29.748 14:58:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:29.748 14:58:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.748 14:58:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.748 14:58:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:29.748 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:29.748 14:58:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:29.748 14:58:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:29.748 14:58:29 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:29.748 14:58:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.748 14:58:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.748 14:58:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:29.748 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:29.748 14:58:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.749 14:58:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:29.749 14:58:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.749 14:58:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:29.749 Found net devices under 0000:09:00.0: mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.749 14:58:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.749 14:58:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.749 14:58:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:29.749 Found net devices under 0000:09:00.1: mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.749 14:58:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:29.749 14:58:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:29.749 14:58:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:29.749 14:58:29 -- nvmf/common.sh@58 -- # uname 00:21:29.749 14:58:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:29.749 14:58:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:29.749 14:58:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:29.749 14:58:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:29.749 14:58:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:29.749 14:58:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:29.749 14:58:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:29.749 14:58:29 -- nvmf/common.sh@68 -- # modprobe 
rdma_ucm 00:21:29.749 14:58:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:29.749 14:58:29 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:29.749 14:58:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:29.749 14:58:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.749 14:58:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:29.749 14:58:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:29.749 14:58:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.749 14:58:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@105 -- # continue 2 00:21:29.749 14:58:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@105 -- # continue 2 00:21:29.749 14:58:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:29.749 14:58:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.749 14:58:29 -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:29.749 14:58:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:29.749 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.749 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:21:29.749 altname enp9s0f0np0 00:21:29.749 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.749 valid_lft forever preferred_lft forever 00:21:29.749 14:58:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:29.749 14:58:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.749 14:58:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:29.749 14:58:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:29.749 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.749 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:21:29.749 altname enp9s0f1np1 00:21:29.749 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.749 valid_lft forever preferred_lft forever 00:21:29.749 14:58:29 -- nvmf/common.sh@411 -- # return 0 00:21:29.749 14:58:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:29.749 14:58:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.749 14:58:29 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:29.749 14:58:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:29.749 14:58:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.749 14:58:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:29.749 14:58:29 -- nvmf/common.sh@94 -- # 
rxe_cfg rxe-net 00:21:29.749 14:58:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.749 14:58:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:29.749 14:58:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@105 -- # continue 2 00:21:29.749 14:58:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.749 14:58:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.749 14:58:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@105 -- # continue 2 00:21:29.749 14:58:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:29.749 14:58:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.749 14:58:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:29.749 14:58:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:29.749 14:58:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:29.749 14:58:29 -- 
nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.749 192.168.100.9' 00:21:29.749 14:58:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:29.749 192.168.100.9' 00:21:29.749 14:58:29 -- nvmf/common.sh@446 -- # head -n 1 00:21:29.749 14:58:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.749 14:58:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:29.749 192.168.100.9' 00:21:29.749 14:58:29 -- nvmf/common.sh@447 -- # tail -n +2 00:21:29.749 14:58:29 -- nvmf/common.sh@447 -- # head -n 1 00:21:29.749 14:58:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.749 14:58:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:29.749 14:58:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.749 14:58:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:29.749 14:58:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:29.749 14:58:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:29.749 14:58:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:29.749 14:58:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:29.749 14:58:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.749 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:21:29.749 14:58:29 -- nvmf/common.sh@470 -- # nvmfpid=269067 00:21:29.749 14:58:29 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.749 14:58:29 -- nvmf/common.sh@471 -- # waitforlisten 269067 00:21:29.749 14:58:29 -- common/autotest_common.sh@817 -- # '[' -z 269067 ']' 00:21:29.749 14:58:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.749 14:58:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:29.749 14:58:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:29.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.749 14:58:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:29.749 14:58:29 -- common/autotest_common.sh@10 -- # set +x 00:21:29.749 [2024-04-26 14:58:29.723265] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:29.749 [2024-04-26 14:58:29.723413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.749 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.009 [2024-04-26 14:58:29.845249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.272 [2024-04-26 14:58:30.098624] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.272 [2024-04-26 14:58:30.098693] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.272 [2024-04-26 14:58:30.098722] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.272 [2024-04-26 14:58:30.098757] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.272 [2024-04-26 14:58:30.098775] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.272 [2024-04-26 14:58:30.098901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.272 [2024-04-26 14:58:30.098965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.272 [2024-04-26 14:58:30.099048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.272 [2024-04-26 14:58:30.099051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.590 14:58:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:30.590 14:58:30 -- common/autotest_common.sh@850 -- # return 0 00:21:30.590 14:58:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:30.590 14:58:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:30.590 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.874 14:58:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.874 14:58:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:30.874 14:58:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.874 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:30.874 [2024-04-26 14:58:30.688830] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7fc75ef26940) succeed. 00:21:30.874 [2024-04-26 14:58:30.700124] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7fc75eee2940) succeed. 
00:21:31.160 14:58:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.160 14:58:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:31.160 14:58:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.160 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:31.160 Malloc0 00:21:31.160 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.160 14:58:31 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:31.160 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.160 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.160 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.160 14:58:31 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.160 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.160 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.160 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.160 14:58:31 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:31.160 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.160 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.160 [2024-04-26 14:58:31.096206] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:31.160 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.160 14:58:31 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:31.160 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.160 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.160 [2024-04-26 14:58:31.103897] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 
00:21:31.160 [ 00:21:31.160 { 00:21:31.160 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:31.160 "subtype": "Discovery", 00:21:31.160 "listen_addresses": [], 00:21:31.160 "allow_any_host": true, 00:21:31.160 "hosts": [] 00:21:31.160 }, 00:21:31.160 { 00:21:31.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.160 "subtype": "NVMe", 00:21:31.160 "listen_addresses": [ 00:21:31.160 { 00:21:31.160 "transport": "RDMA", 00:21:31.160 "trtype": "RDMA", 00:21:31.160 "adrfam": "IPv4", 00:21:31.160 "traddr": "192.168.100.8", 00:21:31.160 "trsvcid": "4420" 00:21:31.160 } 00:21:31.160 ], 00:21:31.160 "allow_any_host": true, 00:21:31.160 "hosts": [], 00:21:31.160 "serial_number": "SPDK00000000000001", 00:21:31.160 "model_number": "SPDK bdev Controller", 00:21:31.160 "max_namespaces": 2, 00:21:31.160 "min_cntlid": 1, 00:21:31.160 "max_cntlid": 65519, 00:21:31.160 "namespaces": [ 00:21:31.160 { 00:21:31.160 "nsid": 1, 00:21:31.160 "bdev_name": "Malloc0", 00:21:31.160 "name": "Malloc0", 00:21:31.160 "nguid": "6FBEFE2C351046F0802F82113E6EDFA1", 00:21:31.160 "uuid": "6fbefe2c-3510-46f0-802f-82113e6edfa1" 00:21:31.160 } 00:21:31.160 ] 00:21:31.160 } 00:21:31.160 ] 00:21:31.161 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.161 14:58:31 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:31.161 14:58:31 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:31.161 14:58:31 -- host/aer.sh@33 -- # aerpid=269235 00:21:31.161 14:58:31 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:31.161 14:58:31 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:31.161 14:58:31 -- common/autotest_common.sh@1251 -- # local i=0 00:21:31.161 14:58:31 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:31.161 14:58:31 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:31.161 14:58:31 -- common/autotest_common.sh@1254 -- # i=1 00:21:31.161 14:58:31 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:31.161 14:58:31 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:31.161 14:58:31 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:31.161 14:58:31 -- common/autotest_common.sh@1254 -- # i=2 00:21:31.161 14:58:31 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:31.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.441 14:58:31 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:31.441 14:58:31 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:21:31.441 14:58:31 -- common/autotest_common.sh@1254 -- # i=3 00:21:31.441 14:58:31 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:31.441 14:58:31 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:31.441 14:58:31 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:31.441 14:58:31 -- common/autotest_common.sh@1262 -- # return 0 00:21:31.441 14:58:31 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:31.441 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.441 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.705 Malloc1 00:21:31.705 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.705 14:58:31 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:31.705 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.705 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.705 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.705 14:58:31 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:31.705 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.705 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.705 [ 00:21:31.705 { 00:21:31.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:31.705 "subtype": "Discovery", 00:21:31.705 "listen_addresses": [], 00:21:31.705 "allow_any_host": true, 00:21:31.705 "hosts": [] 00:21:31.705 }, 00:21:31.705 { 00:21:31.705 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.705 "subtype": "NVMe", 00:21:31.705 "listen_addresses": [ 00:21:31.705 { 00:21:31.705 "transport": "RDMA", 00:21:31.705 "trtype": "RDMA", 00:21:31.705 "adrfam": "IPv4", 00:21:31.705 "traddr": "192.168.100.8", 00:21:31.705 "trsvcid": "4420" 00:21:31.705 } 00:21:31.705 ], 00:21:31.705 "allow_any_host": true, 00:21:31.705 "hosts": [], 00:21:31.705 "serial_number": "SPDK00000000000001", 00:21:31.705 "model_number": "SPDK bdev Controller", 00:21:31.705 "max_namespaces": 2, 00:21:31.705 "min_cntlid": 1, 00:21:31.705 "max_cntlid": 65519, 00:21:31.705 "namespaces": [ 00:21:31.705 { 00:21:31.705 "nsid": 1, 00:21:31.705 "bdev_name": "Malloc0", 00:21:31.705 "name": "Malloc0", 00:21:31.705 "nguid": 
"6FBEFE2C351046F0802F82113E6EDFA1", 00:21:31.705 "uuid": "6fbefe2c-3510-46f0-802f-82113e6edfa1" 00:21:31.705 }, 00:21:31.705 { 00:21:31.705 "nsid": 2, 00:21:31.705 "bdev_name": "Malloc1", 00:21:31.705 "name": "Malloc1", 00:21:31.705 "nguid": "0017D7E03DA2487CA35D37E4C960C1EF", 00:21:31.705 "uuid": "0017d7e0-3da2-487c-a35d-37e4c960c1ef" 00:21:31.706 } 00:21:31.706 ] 00:21:31.706 } 00:21:31.706 ] 00:21:31.706 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.706 14:58:31 -- host/aer.sh@43 -- # wait 269235 00:21:31.706 Asynchronous Event Request test 00:21:31.706 Attaching to 192.168.100.8 00:21:31.706 Attached to 192.168.100.8 00:21:31.706 Registering asynchronous event callbacks... 00:21:31.706 Starting namespace attribute notice tests for all controllers... 00:21:31.706 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:31.706 aer_cb - Changed Namespace 00:21:31.706 Cleaning up... 00:21:31.706 14:58:31 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:31.706 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.706 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:31.965 14:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.965 14:58:31 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:31.965 14:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.965 14:58:31 -- common/autotest_common.sh@10 -- # set +x 00:21:32.225 14:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.225 14:58:32 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.225 14:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.225 14:58:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.225 14:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.226 14:58:32 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:32.226 14:58:32 -- host/aer.sh@51 -- # nvmftestfini 00:21:32.226 
14:58:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:32.226 14:58:32 -- nvmf/common.sh@117 -- # sync 00:21:32.226 14:58:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:32.226 14:58:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:32.226 14:58:32 -- nvmf/common.sh@120 -- # set +e 00:21:32.226 14:58:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.226 14:58:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:32.226 rmmod nvme_rdma 00:21:32.226 rmmod nvme_fabrics 00:21:32.226 14:58:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.226 14:58:32 -- nvmf/common.sh@124 -- # set -e 00:21:32.226 14:58:32 -- nvmf/common.sh@125 -- # return 0 00:21:32.226 14:58:32 -- nvmf/common.sh@478 -- # '[' -n 269067 ']' 00:21:32.226 14:58:32 -- nvmf/common.sh@479 -- # killprocess 269067 00:21:32.226 14:58:32 -- common/autotest_common.sh@936 -- # '[' -z 269067 ']' 00:21:32.226 14:58:32 -- common/autotest_common.sh@940 -- # kill -0 269067 00:21:32.226 14:58:32 -- common/autotest_common.sh@941 -- # uname 00:21:32.226 14:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.226 14:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 269067 00:21:32.226 14:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:32.226 14:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:32.226 14:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 269067' 00:21:32.226 killing process with pid 269067 00:21:32.226 14:58:32 -- common/autotest_common.sh@955 -- # kill 269067 00:21:32.226 [2024-04-26 14:58:32.149278] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:32.226 14:58:32 -- common/autotest_common.sh@960 -- # wait 269067 00:21:32.791 [2024-04-26 14:58:32.685375] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count 
is 4095 but should be 2048 00:21:34.188 14:58:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:34.188 14:58:33 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:34.188 00:21:34.188 real 0m6.301s 00:21:34.188 user 0m13.704s 00:21:34.188 sys 0m2.124s 00:21:34.188 14:58:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:34.188 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.188 ************************************ 00:21:34.188 END TEST nvmf_aer 00:21:34.189 ************************************ 00:21:34.189 14:58:33 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:34.189 14:58:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:34.189 14:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:34.189 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.189 ************************************ 00:21:34.189 START TEST nvmf_async_init 00:21:34.189 ************************************ 00:21:34.189 14:58:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:34.189 * Looking for test storage... 
00:21:34.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:34.189 14:58:34 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.189 14:58:34 -- nvmf/common.sh@7 -- # uname -s 00:21:34.189 14:58:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.189 14:58:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.189 14:58:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.189 14:58:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.189 14:58:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.189 14:58:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.189 14:58:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.189 14:58:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.189 14:58:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.189 14:58:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.189 14:58:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:34.189 14:58:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:34.189 14:58:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.189 14:58:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.189 14:58:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.189 14:58:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.189 14:58:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:34.189 14:58:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.189 14:58:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.189 14:58:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.189 14:58:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.189 14:58:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.189 14:58:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.189 14:58:34 -- paths/export.sh@5 -- # export PATH 00:21:34.189 14:58:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.189 14:58:34 -- nvmf/common.sh@47 -- # : 0 00:21:34.189 14:58:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:34.189 14:58:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:34.189 14:58:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.189 14:58:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.189 14:58:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.189 14:58:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:34.189 14:58:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:34.189 14:58:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:34.189 14:58:34 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:34.189 14:58:34 -- host/async_init.sh@14 -- # null_block_size=512 00:21:34.189 14:58:34 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:34.189 14:58:34 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:34.189 14:58:34 -- host/async_init.sh@20 -- # uuidgen 00:21:34.189 14:58:34 -- host/async_init.sh@20 -- # tr -d - 00:21:34.189 14:58:34 -- host/async_init.sh@20 -- # nguid=5ca1bbe0952c4396ac0d3197268991c9 00:21:34.189 14:58:34 -- host/async_init.sh@22 -- # nvmftestinit 00:21:34.189 14:58:34 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:34.189 14:58:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.189 14:58:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:34.189 14:58:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 
00:21:34.189 14:58:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:34.189 14:58:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.189 14:58:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.189 14:58:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.189 14:58:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:34.189 14:58:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:34.189 14:58:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.189 14:58:34 -- common/autotest_common.sh@10 -- # set +x 00:21:36.091 14:58:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:36.091 14:58:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.091 14:58:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.091 14:58:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.091 14:58:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.091 14:58:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.091 14:58:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.091 14:58:36 -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.092 14:58:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.092 14:58:36 -- nvmf/common.sh@296 -- # e810=() 00:21:36.092 14:58:36 -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.092 14:58:36 -- nvmf/common.sh@297 -- # x722=() 00:21:36.092 14:58:36 -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.092 14:58:36 -- nvmf/common.sh@298 -- # mlx=() 00:21:36.092 14:58:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.092 14:58:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.092 14:58:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:36.092 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:36.092 14:58:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:36.092 14:58:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:36.092 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:36.092 14:58:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 
00:21:36.092 14:58:36 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:36.092 14:58:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.092 14:58:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.092 14:58:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:36.092 Found net devices under 0000:09:00.0: mlx_0_0 00:21:36.092 14:58:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.092 14:58:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.092 14:58:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:36.092 Found net devices under 0000:09:00.1: mlx_0_1 00:21:36.092 14:58:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.092 14:58:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:36.092 14:58:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:36.092 14:58:36 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:36.092 14:58:36 -- nvmf/common.sh@58 -- # uname 00:21:36.092 14:58:36 -- nvmf/common.sh@58 -- # '[' Linux 
'!=' Linux ']' 00:21:36.092 14:58:36 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:36.092 14:58:36 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:36.092 14:58:36 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:36.092 14:58:36 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:36.092 14:58:36 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:36.092 14:58:36 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:36.092 14:58:36 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:36.092 14:58:36 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:36.092 14:58:36 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:36.092 14:58:36 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:36.092 14:58:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:36.092 14:58:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:36.092 14:58:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:36.092 14:58:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:36.092 14:58:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:36.092 14:58:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:36.092 14:58:36 -- nvmf/common.sh@105 -- # continue 2 00:21:36.092 14:58:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.092 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:36.092 14:58:36 -- nvmf/common.sh@105 -- # 
continue 2 00:21:36.092 14:58:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:36.092 14:58:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:36.092 14:58:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:36.092 14:58:36 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:36.092 14:58:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:36.092 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:36.092 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:21:36.092 altname enp9s0f0np0 00:21:36.092 inet 192.168.100.8/24 scope global mlx_0_0 00:21:36.092 valid_lft forever preferred_lft forever 00:21:36.092 14:58:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:36.092 14:58:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:36.092 14:58:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:36.092 14:58:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:36.092 14:58:36 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:36.092 14:58:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:36.092 14:58:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:36.092 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:36.092 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:21:36.093 altname enp9s0f1np1 00:21:36.093 inet 192.168.100.9/24 scope global mlx_0_1 00:21:36.093 valid_lft forever preferred_lft forever 00:21:36.093 14:58:36 -- nvmf/common.sh@411 -- # return 0 00:21:36.093 14:58:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:36.093 14:58:36 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:36.093 14:58:36 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:36.093 14:58:36 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:36.093 14:58:36 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:36.093 14:58:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:36.093 14:58:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:36.093 14:58:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:36.093 14:58:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:36.093 14:58:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:36.093 14:58:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:36.093 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.093 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:36.093 14:58:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:36.093 14:58:36 -- nvmf/common.sh@105 -- # continue 2 00:21:36.093 14:58:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:36.093 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.093 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:36.093 14:58:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:36.093 14:58:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:36.093 14:58:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:36.093 14:58:36 -- nvmf/common.sh@105 -- # continue 2 00:21:36.093 14:58:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:36.093 14:58:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:36.093 14:58:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:36.093 14:58:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:36.093 14:58:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:36.093 14:58:36 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:21:36.093 14:58:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:36.093 14:58:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:36.093 14:58:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:36.093 14:58:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:36.093 14:58:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:36.093 14:58:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:36.093 14:58:36 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:36.093 192.168.100.9' 00:21:36.093 14:58:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:36.093 192.168.100.9' 00:21:36.093 14:58:36 -- nvmf/common.sh@446 -- # head -n 1 00:21:36.093 14:58:36 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:36.093 14:58:36 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:36.093 192.168.100.9' 00:21:36.093 14:58:36 -- nvmf/common.sh@447 -- # tail -n +2 00:21:36.093 14:58:36 -- nvmf/common.sh@447 -- # head -n 1 00:21:36.093 14:58:36 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:36.093 14:58:36 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:36.093 14:58:36 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:36.093 14:58:36 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:36.093 14:58:36 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:36.093 14:58:36 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:36.093 14:58:36 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:36.093 14:58:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:36.093 14:58:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:36.093 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.093 14:58:36 -- nvmf/common.sh@470 -- # nvmfpid=271184 00:21:36.093 14:58:36 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:36.093 14:58:36 -- 
nvmf/common.sh@471 -- # waitforlisten 271184 00:21:36.093 14:58:36 -- common/autotest_common.sh@817 -- # '[' -z 271184 ']' 00:21:36.093 14:58:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.093 14:58:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:36.093 14:58:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.093 14:58:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:36.093 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.376 [2024-04-26 14:58:36.243313] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:36.377 [2024-04-26 14:58:36.243462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.377 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.377 [2024-04-26 14:58:36.371686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.635 [2024-04-26 14:58:36.592036] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.635 [2024-04-26 14:58:36.592125] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.635 [2024-04-26 14:58:36.592155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.635 [2024-04-26 14:58:36.592176] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.635 [2024-04-26 14:58:36.592192] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.635 [2024-04-26 14:58:36.592240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.201 14:58:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.201 14:58:37 -- common/autotest_common.sh@850 -- # return 0 00:21:37.201 14:58:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:37.201 14:58:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.201 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.201 14:58:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.201 14:58:37 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:37.201 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.201 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.201 [2024-04-26 14:58:37.209586] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027940/0x7f29c3160940) succeed. 00:21:37.201 [2024-04-26 14:58:37.222662] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027ac0/0x7f29c3119940) succeed. 
00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 null0 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5ca1bbe0952c4396ac0d3197268991c9 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 [2024-04-26 14:58:37.350247] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 
192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 nvme0n1 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 [ 00:21:37.459 { 00:21:37.459 "name": "nvme0n1", 00:21:37.459 "aliases": [ 00:21:37.459 "5ca1bbe0-952c-4396-ac0d-3197268991c9" 00:21:37.459 ], 00:21:37.459 "product_name": "NVMe disk", 00:21:37.459 "block_size": 512, 00:21:37.459 "num_blocks": 2097152, 00:21:37.459 "uuid": "5ca1bbe0-952c-4396-ac0d-3197268991c9", 00:21:37.459 "assigned_rate_limits": { 00:21:37.459 "rw_ios_per_sec": 0, 00:21:37.459 "rw_mbytes_per_sec": 0, 00:21:37.459 "r_mbytes_per_sec": 0, 00:21:37.459 "w_mbytes_per_sec": 0 00:21:37.459 }, 00:21:37.459 "claimed": false, 00:21:37.459 "zoned": false, 00:21:37.459 "supported_io_types": { 00:21:37.459 "read": true, 00:21:37.459 "write": true, 00:21:37.459 "unmap": false, 00:21:37.459 "write_zeroes": true, 00:21:37.459 "flush": true, 00:21:37.459 "reset": true, 00:21:37.459 "compare": true, 00:21:37.459 "compare_and_write": true, 00:21:37.459 "abort": true, 00:21:37.459 "nvme_admin": true, 00:21:37.459 "nvme_io": true 00:21:37.459 }, 00:21:37.459 "memory_domains": [ 00:21:37.459 { 00:21:37.459 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:37.459 "dma_device_type": 0 00:21:37.459 } 00:21:37.459 ], 00:21:37.459 "driver_specific": { 00:21:37.459 "nvme": [ 00:21:37.459 { 00:21:37.459 "trid": { 00:21:37.459 "trtype": "RDMA", 00:21:37.459 "adrfam": "IPv4", 00:21:37.459 "traddr": "192.168.100.8", 00:21:37.459 "trsvcid": "4420", 00:21:37.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.459 }, 00:21:37.459 "ctrlr_data": { 00:21:37.459 
"cntlid": 1, 00:21:37.459 "vendor_id": "0x8086", 00:21:37.459 "model_number": "SPDK bdev Controller", 00:21:37.459 "serial_number": "00000000000000000000", 00:21:37.459 "firmware_revision": "24.05", 00:21:37.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.459 "oacs": { 00:21:37.459 "security": 0, 00:21:37.459 "format": 0, 00:21:37.459 "firmware": 0, 00:21:37.459 "ns_manage": 0 00:21:37.459 }, 00:21:37.459 "multi_ctrlr": true, 00:21:37.459 "ana_reporting": false 00:21:37.459 }, 00:21:37.459 "vs": { 00:21:37.459 "nvme_version": "1.3" 00:21:37.459 }, 00:21:37.459 "ns_data": { 00:21:37.459 "id": 1, 00:21:37.459 "can_share": true 00:21:37.459 } 00:21:37.459 } 00:21:37.459 ], 00:21:37.459 "mp_policy": "active_passive" 00:21:37.459 } 00:21:37.459 } 00:21:37.459 ] 00:21:37.459 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.459 14:58:37 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:37.459 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.459 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.459 [2024-04-26 14:58:37.470469] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.459 [2024-04-26 14:58:37.514979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.716 [2024-04-26 14:58:37.541715] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:37.716 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.716 14:58:37 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.716 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.716 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.716 [ 00:21:37.716 { 00:21:37.716 "name": "nvme0n1", 00:21:37.716 "aliases": [ 00:21:37.716 "5ca1bbe0-952c-4396-ac0d-3197268991c9" 00:21:37.716 ], 00:21:37.716 "product_name": "NVMe disk", 00:21:37.716 "block_size": 512, 00:21:37.716 "num_blocks": 2097152, 00:21:37.716 "uuid": "5ca1bbe0-952c-4396-ac0d-3197268991c9", 00:21:37.716 "assigned_rate_limits": { 00:21:37.717 "rw_ios_per_sec": 0, 00:21:37.717 "rw_mbytes_per_sec": 0, 00:21:37.717 "r_mbytes_per_sec": 0, 00:21:37.717 "w_mbytes_per_sec": 0 00:21:37.717 }, 00:21:37.717 "claimed": false, 00:21:37.717 "zoned": false, 00:21:37.717 "supported_io_types": { 00:21:37.717 "read": true, 00:21:37.717 "write": true, 00:21:37.717 "unmap": false, 00:21:37.717 "write_zeroes": true, 00:21:37.717 "flush": true, 00:21:37.717 "reset": true, 00:21:37.717 "compare": true, 00:21:37.717 "compare_and_write": true, 00:21:37.717 "abort": true, 00:21:37.717 "nvme_admin": true, 00:21:37.717 "nvme_io": true 00:21:37.717 }, 00:21:37.717 "memory_domains": [ 00:21:37.717 { 00:21:37.717 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:37.717 "dma_device_type": 0 00:21:37.717 } 00:21:37.717 ], 00:21:37.717 "driver_specific": { 00:21:37.717 "nvme": [ 00:21:37.717 { 00:21:37.717 "trid": { 00:21:37.717 "trtype": "RDMA", 00:21:37.717 "adrfam": "IPv4", 00:21:37.717 "traddr": "192.168.100.8", 00:21:37.717 "trsvcid": "4420", 00:21:37.717 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.717 }, 00:21:37.717 "ctrlr_data": { 00:21:37.717 "cntlid": 2, 00:21:37.717 "vendor_id": "0x8086", 00:21:37.717 "model_number": "SPDK bdev Controller", 00:21:37.717 "serial_number": "00000000000000000000", 00:21:37.717 "firmware_revision": "24.05", 00:21:37.717 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:21:37.717 "oacs": { 00:21:37.717 "security": 0, 00:21:37.717 "format": 0, 00:21:37.717 "firmware": 0, 00:21:37.717 "ns_manage": 0 00:21:37.717 }, 00:21:37.717 "multi_ctrlr": true, 00:21:37.717 "ana_reporting": false 00:21:37.717 }, 00:21:37.717 "vs": { 00:21:37.717 "nvme_version": "1.3" 00:21:37.717 }, 00:21:37.717 "ns_data": { 00:21:37.717 "id": 1, 00:21:37.717 "can_share": true 00:21:37.717 } 00:21:37.717 } 00:21:37.717 ], 00:21:37.717 "mp_policy": "active_passive" 00:21:37.717 } 00:21:37.717 } 00:21:37.717 ] 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@53 -- # mktemp 00:21:37.717 14:58:37 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sphyvYzxu9 00:21:37.717 14:58:37 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:37.717 14:58:37 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sphyvYzxu9 00:21:37.717 14:58:37 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 [2024-04-26 14:58:37.627328] rdma.c:3018:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sphyvYzxu9 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sphyvYzxu9 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 [2024-04-26 14:58:37.643314] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.717 nvme0n1 00:21:37.717 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.717 14:58:37 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.717 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.717 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.717 [ 00:21:37.717 { 00:21:37.717 "name": "nvme0n1", 00:21:37.717 "aliases": [ 00:21:37.717 "5ca1bbe0-952c-4396-ac0d-3197268991c9" 00:21:37.717 ], 00:21:37.717 "product_name": "NVMe disk", 00:21:37.717 "block_size": 512, 00:21:37.717 "num_blocks": 2097152, 00:21:37.717 "uuid": "5ca1bbe0-952c-4396-ac0d-3197268991c9", 00:21:37.717 "assigned_rate_limits": { 00:21:37.717 "rw_ios_per_sec": 0, 00:21:37.717 "rw_mbytes_per_sec": 0, 00:21:37.717 "r_mbytes_per_sec": 0, 00:21:37.717 "w_mbytes_per_sec": 0 00:21:37.717 }, 00:21:37.717 "claimed": false, 00:21:37.717 "zoned": false, 00:21:37.717 "supported_io_types": { 00:21:37.717 "read": true, 
00:21:37.717 "write": true, 00:21:37.717 "unmap": false, 00:21:37.717 "write_zeroes": true, 00:21:37.717 "flush": true, 00:21:37.717 "reset": true, 00:21:37.717 "compare": true, 00:21:37.717 "compare_and_write": true, 00:21:37.717 "abort": true, 00:21:37.717 "nvme_admin": true, 00:21:37.717 "nvme_io": true 00:21:37.717 }, 00:21:37.717 "memory_domains": [ 00:21:37.717 { 00:21:37.717 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:37.717 "dma_device_type": 0 00:21:37.717 } 00:21:37.717 ], 00:21:37.717 "driver_specific": { 00:21:37.717 "nvme": [ 00:21:37.717 { 00:21:37.717 "trid": { 00:21:37.717 "trtype": "RDMA", 00:21:37.717 "adrfam": "IPv4", 00:21:37.717 "traddr": "192.168.100.8", 00:21:37.717 "trsvcid": "4421", 00:21:37.717 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.717 }, 00:21:37.717 "ctrlr_data": { 00:21:37.717 "cntlid": 3, 00:21:37.717 "vendor_id": "0x8086", 00:21:37.717 "model_number": "SPDK bdev Controller", 00:21:37.718 "serial_number": "00000000000000000000", 00:21:37.718 "firmware_revision": "24.05", 00:21:37.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.718 "oacs": { 00:21:37.718 "security": 0, 00:21:37.718 "format": 0, 00:21:37.718 "firmware": 0, 00:21:37.718 "ns_manage": 0 00:21:37.718 }, 00:21:37.718 "multi_ctrlr": true, 00:21:37.718 "ana_reporting": false 00:21:37.718 }, 00:21:37.718 "vs": { 00:21:37.718 "nvme_version": "1.3" 00:21:37.718 }, 00:21:37.718 "ns_data": { 00:21:37.718 "id": 1, 00:21:37.718 "can_share": true 00:21:37.718 } 00:21:37.718 } 00:21:37.718 ], 00:21:37.718 "mp_policy": "active_passive" 00:21:37.718 } 00:21:37.718 } 00:21:37.718 ] 00:21:37.718 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.718 14:58:37 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.718 14:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.718 14:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:37.718 14:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.718 
14:58:37 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.sphyvYzxu9 00:21:37.718 14:58:37 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:37.718 14:58:37 -- host/async_init.sh@78 -- # nvmftestfini 00:21:37.718 14:58:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:37.718 14:58:37 -- nvmf/common.sh@117 -- # sync 00:21:37.718 14:58:37 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:37.718 14:58:37 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:37.718 14:58:37 -- nvmf/common.sh@120 -- # set +e 00:21:37.718 14:58:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:37.718 14:58:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:37.975 rmmod nvme_rdma 00:21:37.975 rmmod nvme_fabrics 00:21:37.975 14:58:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:37.975 14:58:37 -- nvmf/common.sh@124 -- # set -e 00:21:37.975 14:58:37 -- nvmf/common.sh@125 -- # return 0 00:21:37.975 14:58:37 -- nvmf/common.sh@478 -- # '[' -n 271184 ']' 00:21:37.975 14:58:37 -- nvmf/common.sh@479 -- # killprocess 271184 00:21:37.975 14:58:37 -- common/autotest_common.sh@936 -- # '[' -z 271184 ']' 00:21:37.975 14:58:37 -- common/autotest_common.sh@940 -- # kill -0 271184 00:21:37.975 14:58:37 -- common/autotest_common.sh@941 -- # uname 00:21:37.975 14:58:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:37.975 14:58:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 271184 00:21:37.975 14:58:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:37.975 14:58:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:37.975 14:58:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 271184' 00:21:37.975 killing process with pid 271184 00:21:37.975 14:58:37 -- common/autotest_common.sh@955 -- # kill 271184 00:21:37.975 14:58:37 -- common/autotest_common.sh@960 -- # wait 271184 00:21:37.975 [2024-04-26 14:58:38.054110] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool 
count is 4095 but should be 2048 00:21:39.350 14:58:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:39.350 14:58:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:39.350 00:21:39.350 real 0m5.070s 00:21:39.350 user 0m3.776s 00:21:39.350 sys 0m1.916s 00:21:39.350 14:58:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.350 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:39.350 ************************************ 00:21:39.350 END TEST nvmf_async_init 00:21:39.350 ************************************ 00:21:39.350 14:58:39 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:39.350 14:58:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.350 14:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.350 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:39.350 ************************************ 00:21:39.350 START TEST dma 00:21:39.350 ************************************ 00:21:39.350 14:58:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:39.350 * Looking for test storage... 
00:21:39.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:39.350 14:58:39 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.350 14:58:39 -- nvmf/common.sh@7 -- # uname -s 00:21:39.350 14:58:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.350 14:58:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.350 14:58:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.350 14:58:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.350 14:58:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.350 14:58:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.350 14:58:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.350 14:58:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.350 14:58:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.350 14:58:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.350 14:58:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:21:39.350 14:58:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:21:39.350 14:58:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.350 14:58:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.350 14:58:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.350 14:58:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.350 14:58:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:39.350 14:58:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.350 14:58:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.350 14:58:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.350 14:58:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.350 14:58:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.351 14:58:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.351 14:58:39 -- paths/export.sh@5 -- # export PATH 00:21:39.351 14:58:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.351 14:58:39 -- nvmf/common.sh@47 -- # : 0 00:21:39.351 14:58:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.351 14:58:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.351 14:58:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.351 14:58:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.351 14:58:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.351 14:58:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.351 14:58:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.351 14:58:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.351 14:58:39 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:39.351 14:58:39 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:39.351 14:58:39 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:21:39.351 14:58:39 -- host/dma.sh@18 -- # subsystem=0 00:21:39.351 14:58:39 -- host/dma.sh@93 -- # nvmftestinit 00:21:39.351 14:58:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:39.351 14:58:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.351 14:58:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:39.351 14:58:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:39.351 14:58:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:39.351 14:58:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.351 14:58:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:21:39.351 14:58:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.351 14:58:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:39.351 14:58:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:39.351 14:58:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.351 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:41.252 14:58:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:41.252 14:58:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.252 14:58:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.252 14:58:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.252 14:58:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.252 14:58:41 -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.252 14:58:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@296 -- # e810=() 00:21:41.252 14:58:41 -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.252 14:58:41 -- nvmf/common.sh@297 -- # x722=() 00:21:41.252 14:58:41 -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.252 14:58:41 -- nvmf/common.sh@298 -- # mlx=() 00:21:41.252 14:58:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.252 14:58:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.252 14:58:41 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.252 14:58:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:21:41.252 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:21:41.252 14:58:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:41.252 14:58:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:21:41.252 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:21:41.252 14:58:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:41.252 14:58:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.252 14:58:41 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.252 14:58:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.252 14:58:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:21:41.252 Found net devices under 0000:09:00.0: mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.252 14:58:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.252 14:58:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:21:41.252 Found net devices under 0000:09:00.1: mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.252 14:58:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:41.252 14:58:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:41.252 14:58:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:41.252 14:58:41 -- nvmf/common.sh@58 -- # uname 00:21:41.252 14:58:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:41.252 14:58:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:41.252 14:58:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:41.252 14:58:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:41.252 14:58:41 -- 
nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:41.252 14:58:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:41.252 14:58:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:41.252 14:58:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:41.252 14:58:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:41.252 14:58:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:41.252 14:58:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:41.252 14:58:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:41.252 14:58:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:41.252 14:58:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@105 -- # continue 2 00:21:41.252 14:58:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@105 -- # continue 2 00:21:41.252 14:58:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:41.252 14:58:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.252 14:58:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:41.252 14:58:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:41.252 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:41.252 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:21:41.252 altname enp9s0f0np0 00:21:41.252 inet 192.168.100.8/24 scope global mlx_0_0 00:21:41.252 valid_lft forever preferred_lft forever 00:21:41.252 14:58:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:41.252 14:58:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.252 14:58:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:41.252 14:58:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:41.252 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:41.252 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:21:41.252 altname enp9s0f1np1 00:21:41.252 inet 192.168.100.9/24 scope global mlx_0_1 00:21:41.252 valid_lft forever preferred_lft forever 00:21:41.252 14:58:41 -- nvmf/common.sh@411 -- # return 0 00:21:41.252 14:58:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:41.252 14:58:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:41.252 14:58:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:41.252 14:58:41 -- nvmf/common.sh@86 
-- # get_rdma_if_list 00:21:41.252 14:58:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:41.252 14:58:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:41.252 14:58:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:41.252 14:58:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:41.252 14:58:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@105 -- # continue 2 00:21:41.252 14:58:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.252 14:58:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:41.252 14:58:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:41.252 14:58:41 -- nvmf/common.sh@105 -- # continue 2 00:21:41.252 14:58:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:41.252 14:58:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:41.252 14:58:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.253 14:58:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.253 14:58:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:41.253 14:58:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:41.253 14:58:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:21:41.253 14:58:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:41.253 14:58:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.253 14:58:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.511 14:58:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:41.511 192.168.100.9' 00:21:41.511 14:58:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:41.511 192.168.100.9' 00:21:41.511 14:58:41 -- nvmf/common.sh@446 -- # head -n 1 00:21:41.511 14:58:41 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:41.511 14:58:41 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:41.511 192.168.100.9' 00:21:41.511 14:58:41 -- nvmf/common.sh@447 -- # tail -n +2 00:21:41.511 14:58:41 -- nvmf/common.sh@447 -- # head -n 1 00:21:41.511 14:58:41 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:41.511 14:58:41 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:41.511 14:58:41 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:41.511 14:58:41 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:41.511 14:58:41 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:41.511 14:58:41 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:41.511 14:58:41 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:21:41.511 14:58:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:41.511 14:58:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:41.511 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.511 14:58:41 -- nvmf/common.sh@470 -- # nvmfpid=273265 00:21:41.511 14:58:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:41.511 14:58:41 -- nvmf/common.sh@471 -- # waitforlisten 273265 00:21:41.511 14:58:41 -- common/autotest_common.sh@817 -- # '[' -z 273265 ']' 00:21:41.511 14:58:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.511 14:58:41 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:41.511 14:58:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.511 14:58:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:41.511 14:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:41.511 [2024-04-26 14:58:41.438991] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:41.511 [2024-04-26 14:58:41.439145] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.511 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.511 [2024-04-26 14:58:41.559864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:41.768 [2024-04-26 14:58:41.802524] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.768 [2024-04-26 14:58:41.802603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.768 [2024-04-26 14:58:41.802636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.769 [2024-04-26 14:58:41.802660] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.769 [2024-04-26 14:58:41.802679] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.769 [2024-04-26 14:58:41.802803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.769 [2024-04-26 14:58:41.802819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.334 14:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.334 14:58:42 -- common/autotest_common.sh@850 -- # return 0 00:21:42.334 14:58:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:42.334 14:58:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.334 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.334 14:58:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.334 14:58:42 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:42.334 14:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.334 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.334 [2024-04-26 14:58:42.401201] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027c40/0x7f899d983940) succeed. 00:21:42.334 [2024-04-26 14:58:42.413328] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000027dc0/0x7f899d93f940) succeed. 
00:21:42.592 14:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.592 14:58:42 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:21:42.592 14:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.592 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.850 Malloc0 00:21:42.850 14:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.850 14:58:42 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:42.850 14:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.850 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.850 14:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.850 14:58:42 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:42.850 14:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.850 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.850 14:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.850 14:58:42 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:42.850 14:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.850 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.850 [2024-04-26 14:58:42.909206] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:42.850 14:58:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.850 14:58:42 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:21:42.850 14:58:42 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:21:42.850 14:58:42 -- nvmf/common.sh@521 -- # config=() 00:21:42.850 14:58:42 -- nvmf/common.sh@521 -- # local subsystem config 00:21:42.850 14:58:42 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:42.850 14:58:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:42.850 { 00:21:42.850 "params": { 00:21:42.850 "name": "Nvme$subsystem", 00:21:42.850 "trtype": "$TEST_TRANSPORT", 00:21:42.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.850 "adrfam": "ipv4", 00:21:42.850 "trsvcid": "$NVMF_PORT", 00:21:42.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.850 "hdgst": ${hdgst:-false}, 00:21:42.850 "ddgst": ${ddgst:-false} 00:21:42.850 }, 00:21:42.850 "method": "bdev_nvme_attach_controller" 00:21:42.850 } 00:21:42.850 EOF 00:21:42.850 )") 00:21:42.850 14:58:42 -- nvmf/common.sh@543 -- # cat 00:21:42.850 14:58:42 -- nvmf/common.sh@545 -- # jq . 00:21:42.850 14:58:42 -- nvmf/common.sh@546 -- # IFS=, 00:21:42.850 14:58:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:42.850 "params": { 00:21:42.850 "name": "Nvme0", 00:21:42.850 "trtype": "rdma", 00:21:42.850 "traddr": "192.168.100.8", 00:21:42.850 "adrfam": "ipv4", 00:21:42.850 "trsvcid": "4420", 00:21:42.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:42.850 "hdgst": false, 00:21:42.850 "ddgst": false 00:21:42.850 }, 00:21:42.850 "method": "bdev_nvme_attach_controller" 00:21:42.850 }' 00:21:43.108 [2024-04-26 14:58:42.985453] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:43.108 [2024-04-26 14:58:42.985597] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273419 ] 00:21:43.108 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.108 [2024-04-26 14:58:43.107049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:43.365 [2024-04-26 14:58:43.340223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.365 [2024-04-26 14:58:43.340228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.920 bdev Nvme0n1 reports 1 memory domains 00:21:49.920 bdev Nvme0n1 supports RDMA memory domain 00:21:49.920 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:49.920 ========================================================================== 00:21:49.920 Latency [us] 00:21:49.920 IOPS MiB/s Average min max 00:21:49.920 Core 2: 14468.34 56.52 1104.67 464.01 14529.15 00:21:49.920 Core 3: 14294.00 55.84 1118.24 446.60 14489.19 00:21:49.920 ========================================================================== 00:21:49.920 Total : 28762.34 112.35 1111.41 446.60 14529.15 00:21:49.920 00:21:49.920 Total operations: 143860, translate 143860 pull_push 0 memzero 0 00:21:49.920 14:58:49 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:21:49.920 14:58:49 -- host/dma.sh@107 -- # gen_malloc_json 00:21:49.920 14:58:49 -- host/dma.sh@21 -- # jq . 00:21:49.920 [2024-04-26 14:58:49.865520] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:49.920 [2024-04-26 14:58:49.865667] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274210 ] 00:21:49.920 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.920 [2024-04-26 14:58:49.992599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:50.179 [2024-04-26 14:58:50.222953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.179 [2024-04-26 14:58:50.222957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.283 bdev Malloc0 reports 2 memory domains 00:21:58.283 bdev Malloc0 doesn't support RDMA memory domain 00:21:58.283 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:58.283 ========================================================================== 00:21:58.283 Latency [us] 00:21:58.283 IOPS MiB/s Average min max 00:21:58.283 Core 2: 10106.44 39.48 1581.92 548.17 2101.65 00:21:58.283 Core 3: 10215.39 39.90 1564.97 534.23 2050.10 00:21:58.283 ========================================================================== 00:21:58.283 Total : 20321.83 79.38 1573.40 534.23 2101.65 00:21:58.283 00:21:58.283 Total operations: 101661, translate 0 pull_push 406644 memzero 0 00:21:58.283 14:58:56 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:21:58.283 14:58:56 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:21:58.283 14:58:56 -- host/dma.sh@48 -- # local subsystem=0 00:21:58.283 14:58:56 -- host/dma.sh@50 -- # jq . 00:21:58.283 Ignoring -M option 00:21:58.283 [2024-04-26 14:58:57.072517] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:58.283 [2024-04-26 14:58:57.072652] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275120 ] 00:21:58.283 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.283 [2024-04-26 14:58:57.195217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:58.283 [2024-04-26 14:58:57.428151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.283 [2024-04-26 14:58:57.428158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.283 [2024-04-26 14:58:57.927495] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:22:03.543 [2024-04-26 14:59:02.961787] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:22:04.108 bdev d00a6b67-001b-4613-b9ac-75c39a5a56a2 reports 1 memory domains 00:22:04.108 bdev d00a6b67-001b-4613-b9ac-75c39a5a56a2 supports RDMA memory domain 00:22:04.108 Initialization complete, running randread IO for 5 sec on 2 cores 00:22:04.108 ========================================================================== 00:22:04.108 Latency [us] 00:22:04.108 IOPS MiB/s Average min max 00:22:04.108 Core 2: 53963.78 210.80 295.34 90.94 2301.98 00:22:04.108 Core 3: 56344.98 220.10 282.83 84.60 2215.87 00:22:04.108 ========================================================================== 00:22:04.108 Total : 110308.76 430.89 288.95 84.60 2301.98 00:22:04.108 00:22:04.108 Total operations: 551636, translate 0 pull_push 0 memzero 551636 00:22:04.108 14:59:04 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 
traddr:192.168.100.8 trsvcid:4420' 00:22:04.108 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.108 [2024-04-26 14:59:04.163253] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:06.635 Initializing NVMe Controllers 00:22:06.635 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:22:06.635 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:06.635 Initialization complete. Launching workers. 00:22:06.635 ======================================================== 00:22:06.635 Latency(us) 00:22:06.635 Device Information : IOPS MiB/s Average min max 00:22:06.635 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2030.69 7.93 7940.30 3985.29 8005.91 00:22:06.635 ======================================================== 00:22:06.635 Total : 2030.69 7.93 7940.30 3985.29 8005.91 00:22:06.635 00:22:06.635 14:59:06 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:22:06.635 14:59:06 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:22:06.635 14:59:06 -- host/dma.sh@48 -- # local subsystem=0 00:22:06.635 14:59:06 -- host/dma.sh@50 -- # jq . 00:22:06.635 [2024-04-26 14:59:06.695463] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:06.635 [2024-04-26 14:59:06.695618] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276191 ] 00:22:06.893 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.893 [2024-04-26 14:59:06.816275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:07.150 [2024-04-26 14:59:07.041899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.150 [2024-04-26 14:59:07.041920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.715 [2024-04-26 14:59:07.527969] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:22:12.978 [2024-04-26 14:59:12.562935] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:22:13.542 bdev bd38ce9d-8e43-4014-a9d2-c5c1cfe5a686 reports 1 memory domains 00:22:13.543 bdev bd38ce9d-8e43-4014-a9d2-c5c1cfe5a686 supports RDMA memory domain 00:22:13.543 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:13.543 ========================================================================== 00:22:13.543 Latency [us] 00:22:13.543 IOPS MiB/s Average min max 00:22:13.543 Core 2: 13797.97 53.90 1158.37 33.11 7232.88 00:22:13.543 Core 3: 12235.58 47.80 1306.56 37.56 15198.25 00:22:13.543 ========================================================================== 00:22:13.543 Total : 26033.54 101.69 1228.02 33.11 15198.25 00:22:13.543 00:22:13.543 Total operations: 130202, translate 130101 pull_push 0 memzero 101 00:22:13.543 14:59:13 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:13.543 14:59:13 -- host/dma.sh@120 -- # nvmftestfini 00:22:13.543 14:59:13 -- nvmf/common.sh@477 -- # 
nvmfcleanup 00:22:13.543 14:59:13 -- nvmf/common.sh@117 -- # sync 00:22:13.543 14:59:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:13.543 14:59:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:13.543 14:59:13 -- nvmf/common.sh@120 -- # set +e 00:22:13.543 14:59:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.543 14:59:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:13.543 rmmod nvme_rdma 00:22:13.543 rmmod nvme_fabrics 00:22:13.543 14:59:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.543 14:59:13 -- nvmf/common.sh@124 -- # set -e 00:22:13.543 14:59:13 -- nvmf/common.sh@125 -- # return 0 00:22:13.543 14:59:13 -- nvmf/common.sh@478 -- # '[' -n 273265 ']' 00:22:13.543 14:59:13 -- nvmf/common.sh@479 -- # killprocess 273265 00:22:13.543 14:59:13 -- common/autotest_common.sh@936 -- # '[' -z 273265 ']' 00:22:13.543 14:59:13 -- common/autotest_common.sh@940 -- # kill -0 273265 00:22:13.543 14:59:13 -- common/autotest_common.sh@941 -- # uname 00:22:13.543 14:59:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.543 14:59:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 273265 00:22:13.543 14:59:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:13.543 14:59:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:13.543 14:59:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 273265' 00:22:13.543 killing process with pid 273265 00:22:13.543 14:59:13 -- common/autotest_common.sh@955 -- # kill 273265 00:22:13.543 14:59:13 -- common/autotest_common.sh@960 -- # wait 273265 00:22:14.119 [2024-04-26 14:59:13.922595] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:16.016 14:59:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:16.016 14:59:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:16.016 00:22:16.016 real 0m36.579s 00:22:16.016 user 1m58.949s 00:22:16.016 sys 
0m3.514s 00:22:16.016 14:59:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:16.016 14:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:16.016 ************************************ 00:22:16.016 END TEST dma 00:22:16.016 ************************************ 00:22:16.016 14:59:15 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:16.016 14:59:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.016 14:59:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.016 14:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:16.016 ************************************ 00:22:16.016 START TEST nvmf_identify 00:22:16.016 ************************************ 00:22:16.016 14:59:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:16.016 * Looking for test storage... 00:22:16.016 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:16.016 14:59:16 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.016 14:59:16 -- nvmf/common.sh@7 -- # uname -s 00:22:16.016 14:59:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.016 14:59:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.016 14:59:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.016 14:59:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.016 14:59:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.016 14:59:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.016 14:59:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.016 14:59:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.016 14:59:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.016 14:59:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.016 14:59:16 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:16.016 14:59:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:16.016 14:59:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.016 14:59:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.016 14:59:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.016 14:59:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.016 14:59:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:16.016 14:59:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.016 14:59:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.016 14:59:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.016 14:59:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.016 14:59:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.016 14:59:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.016 14:59:16 -- paths/export.sh@5 -- # export PATH 00:22:16.017 14:59:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.017 14:59:16 -- nvmf/common.sh@47 -- # : 0 00:22:16.017 14:59:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.017 14:59:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.017 14:59:16 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:22:16.017 14:59:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.017 14:59:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.017 14:59:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.017 14:59:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.017 14:59:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.017 14:59:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.017 14:59:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.017 14:59:16 -- host/identify.sh@14 -- # nvmftestinit 00:22:16.017 14:59:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:16.017 14:59:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.017 14:59:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:16.017 14:59:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:16.017 14:59:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:16.017 14:59:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.017 14:59:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.017 14:59:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.017 14:59:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:16.017 14:59:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:16.017 14:59:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.017 14:59:16 -- common/autotest_common.sh@10 -- # set +x 00:22:17.915 14:59:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:17.915 14:59:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.915 14:59:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.915 14:59:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.915 14:59:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.915 14:59:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.915 14:59:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.915 14:59:17 -- nvmf/common.sh@295 -- # 
net_devs=() 00:22:17.915 14:59:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.915 14:59:17 -- nvmf/common.sh@296 -- # e810=() 00:22:17.915 14:59:17 -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.915 14:59:17 -- nvmf/common.sh@297 -- # x722=() 00:22:17.915 14:59:17 -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.915 14:59:17 -- nvmf/common.sh@298 -- # mlx=() 00:22:17.915 14:59:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.915 14:59:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.915 14:59:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.915 14:59:17 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:17.915 14:59:17 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:17.915 14:59:17 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:17.915 14:59:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.915 14:59:17 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:22:17.915 14:59:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:17.915 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:17.915 14:59:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:17.915 14:59:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.915 14:59:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:17.915 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:17.915 14:59:17 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:17.915 14:59:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.915 14:59:17 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:17.915 14:59:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.915 14:59:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.915 14:59:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.915 14:59:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.915 14:59:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:17.915 Found net devices under 0000:09:00.0: mlx_0_0 00:22:17.915 14:59:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.916 14:59:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.916 14:59:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.916 14:59:17 -- nvmf/common.sh@388 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.916 14:59:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:17.916 Found net devices under 0000:09:00.1: mlx_0_1 00:22:17.916 14:59:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.916 14:59:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:17.916 14:59:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:17.916 14:59:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:17.916 14:59:17 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:17.916 14:59:17 -- nvmf/common.sh@58 -- # uname 00:22:17.916 14:59:17 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:17.916 14:59:17 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:17.916 14:59:17 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:17.916 14:59:17 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:17.916 14:59:17 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:17.916 14:59:17 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:17.916 14:59:17 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:17.916 14:59:17 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:17.916 14:59:17 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:17.916 14:59:17 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:17.916 14:59:17 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:17.916 14:59:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:17.916 14:59:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:17.916 14:59:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:17.916 14:59:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:17.916 14:59:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:17.916 14:59:17 -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:17.916 14:59:17 -- nvmf/common.sh@105 -- # continue 2 00:22:17.916 14:59:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.916 14:59:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:17.916 14:59:17 -- nvmf/common.sh@105 -- # continue 2 00:22:17.916 14:59:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:17.916 14:59:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:17.916 14:59:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:17.916 14:59:17 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:17.916 14:59:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:17.916 14:59:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:17.916 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:17.916 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:22:17.916 altname enp9s0f0np0 00:22:17.916 inet 192.168.100.8/24 scope global mlx_0_0 00:22:17.916 valid_lft forever preferred_lft forever 00:22:17.916 14:59:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:17.916 14:59:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:17.916 14:59:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:22:17.916 14:59:17 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:22:17.916 14:59:17 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:22:17.916 14:59:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:22:17.916 14:59:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:22:17.916 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:22:17.916 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff
00:22:17.916 altname enp9s0f1np1
00:22:17.916 inet 192.168.100.9/24 scope global mlx_0_1
00:22:17.916 valid_lft forever preferred_lft forever
00:22:17.916 14:59:17 -- nvmf/common.sh@411 -- # return 0
00:22:17.916 14:59:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:22:17.916 14:59:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:22:17.916 14:59:17 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:22:17.916 14:59:17 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:22:17.916 14:59:17 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:22:17.916 14:59:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:22:17.916 14:59:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:22:17.916 14:59:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:22:17.916 14:59:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:22:18.175 14:59:18 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:22:18.175 14:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:22:18.175 14:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:18.175 14:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:22:18.175 14:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:22:18.175 14:59:18 -- nvmf/common.sh@105 -- # continue 2
00:22:18.175 14:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:22:18.175 14:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:18.175 14:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:22:18.175 14:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:18.175 14:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:22:18.175 14:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:22:18.175 14:59:18 -- nvmf/common.sh@105 -- # continue 2
00:22:18.175 14:59:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:22:18.175 14:59:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:22:18.175 14:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:22:18.175 14:59:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:22:18.175 14:59:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:22:18.175 14:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:22:18.175 14:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:22:18.175 14:59:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:22:18.175 192.168.100.9'
00:22:18.175 14:59:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:22:18.175 192.168.100.9'
00:22:18.175 14:59:18 -- nvmf/common.sh@446 -- # head -n 1
00:22:18.175 14:59:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:22:18.175 14:59:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:22:18.175 192.168.100.9'
00:22:18.175 14:59:18 -- nvmf/common.sh@447 -- # tail -n +2
00:22:18.175 14:59:18 -- nvmf/common.sh@447 -- # head -n 1
00:22:18.175 14:59:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:22:18.175 14:59:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:22:18.175 14:59:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:22:18.175 14:59:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:22:18.175 14:59:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:22:18.175 14:59:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:22:18.175 14:59:18 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:22:18.175 14:59:18 -- common/autotest_common.sh@710 -- # xtrace_disable
00:22:18.175 14:59:18 -- common/autotest_common.sh@10 -- # set +x
00:22:18.175 14:59:18 -- host/identify.sh@19 -- # nvmfpid=278911
00:22:18.175 14:59:18 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:18.175 14:59:18 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:18.175 14:59:18 -- host/identify.sh@23 -- # waitforlisten 278911
00:22:18.175 14:59:18 -- common/autotest_common.sh@817 -- # '[' -z 278911 ']'
00:22:18.175 14:59:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:18.175 14:59:18 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:18.175 14:59:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:18.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:18.175 14:59:18 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:18.175 14:59:18 -- common/autotest_common.sh@10 -- # set +x
00:22:18.175 [2024-04-26 14:59:18.122342] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:22:18.175 [2024-04-26 14:59:18.122485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:18.175 EAL: No free 2048 kB hugepages reported on node 1
00:22:18.433 [2024-04-26 14:59:18.255998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:18.433 [2024-04-26 14:59:18.513153] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:18.433 [2024-04-26 14:59:18.513239] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:18.433 [2024-04-26 14:59:18.513266] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:18.433 [2024-04-26 14:59:18.513286] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:18.433 [2024-04-26 14:59:18.513327] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:18.690 [2024-04-26 14:59:18.514158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:18.690 [2024-04-26 14:59:18.514260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:18.690 [2024-04-26 14:59:18.514328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:18.690 [2024-04-26 14:59:18.514330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:18.947 14:59:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:18.947 14:59:19 -- common/autotest_common.sh@850 -- # return 0
00:22:18.947 14:59:19 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:22:18.947 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:18.947 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.207 [2024-04-26 14:59:19.051069] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f7b16e83940) succeed.
00:22:19.207 [2024-04-26 14:59:19.061925] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f7b16e3c940) succeed.
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:22:19.465 14:59:19 -- common/autotest_common.sh@716 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 14:59:19 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 Malloc0
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 [2024-04-26 14:59:19.486949] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.465 14:59:19 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:19.465 14:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:19.465 14:59:19 -- common/autotest_common.sh@10 -- # set +x
00:22:19.465 [2024-04-26 14:59:19.502605] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:22:19.465 [
00:22:19.465 {
00:22:19.465 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:19.465 "subtype": "Discovery",
00:22:19.465 "listen_addresses": [
00:22:19.465 {
00:22:19.465 "transport": "RDMA",
00:22:19.465 "trtype": "RDMA",
00:22:19.465 "adrfam": "IPv4",
00:22:19.465 "traddr": "192.168.100.8",
00:22:19.465 "trsvcid": "4420"
00:22:19.465 }
00:22:19.465 ],
00:22:19.465 "allow_any_host": true,
00:22:19.465 "hosts": []
00:22:19.465 },
00:22:19.465 {
00:22:19.465 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:19.465 "subtype": "NVMe",
00:22:19.465 "listen_addresses": [
00:22:19.465 {
00:22:19.465 "transport": "RDMA",
00:22:19.466 "trtype": "RDMA",
00:22:19.466 "adrfam": "IPv4",
00:22:19.466 "traddr": "192.168.100.8",
00:22:19.466 "trsvcid": "4420"
00:22:19.466 }
00:22:19.466 ],
00:22:19.466 "allow_any_host": true,
00:22:19.466 "hosts": [],
00:22:19.466 "serial_number": "SPDK00000000000001",
00:22:19.466 "model_number": "SPDK bdev Controller",
00:22:19.466 "max_namespaces": 32,
00:22:19.466 "min_cntlid": 1,
00:22:19.466 "max_cntlid": 65519,
00:22:19.466 "namespaces": [
00:22:19.466 {
00:22:19.466 "nsid": 1,
00:22:19.466 "bdev_name": "Malloc0",
00:22:19.466 "name": "Malloc0",
00:22:19.466 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:19.466 "eui64": "ABCDEF0123456789",
00:22:19.466 "uuid": "89860622-08b2-4a59-8d8d-19a4d868db61"
00:22:19.466 }
00:22:19.466 ]
00:22:19.466 }
00:22:19.466 ]
00:22:19.466 14:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:19.466 14:59:19 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:22:19.727 [2024-04-26 14:59:19.552198] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:22:19.727 [2024-04-26 14:59:19.552290] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279182 ]
00:22:19.727 EAL: No free 2048 kB hugepages reported on node 1
00:22:19.727 [2024-04-26 14:59:19.625797] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:22:19.727 [2024-04-26 14:59:19.625941] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:22:19.727 [2024-04-26 14:59:19.625987] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:22:19.727 [2024-04-26 14:59:19.626002] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:22:19.727 [2024-04-26 14:59:19.626068] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:22:19.727 [2024-04-26 14:59:19.641730] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:22:19.727 [2024-04-26 14:59:19.658237] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0
00:22:19.727 [2024-04-26 14:59:19.658264] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:22:19.727 [2024-04-26 14:59:19.658294] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2c0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658316] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2e8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658337] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf310 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658351] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf338 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658367] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf360 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658380] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf388 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658395] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3b0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658408] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3d8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658425] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf400 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658438] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf428 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658470] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf450 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658485] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf478 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658500] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4a0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658512] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4c8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658526] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4f0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658539] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf518 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658553] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf540 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658565] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf568 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658579] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf590 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658592] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5b8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658607] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5e0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658619] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf608 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658634] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf630 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658646] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf658 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658661] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658673] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658687] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658701] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658716] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658728] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658744] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658756] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:22:19.727 [2024-04-26 14:59:19.658772] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0
00:22:19.727 [2024-04-26 14:59:19.658783] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:22:19.727 [2024-04-26 14:59:19.658835] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.658881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cee00 len:0x400 key:0x188600
00:22:19.727 [2024-04-26 14:59:19.666159] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.727 [2024-04-26 14:59:19.666191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:22:19.727 [2024-04-26 14:59:19.666217] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2c0 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.666242] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:22:19.727 [2024-04-26 14:59:19.666275] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:22:19.727 [2024-04-26 14:59:19.666297] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:22:19.727 [2024-04-26 14:59:19.666332] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.666354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.727 [2024-04-26 14:59:19.666405] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.727 [2024-04-26 14:59:19.666424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:22:19.727 [2024-04-26 14:59:19.666464] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:22:19.727 [2024-04-26 14:59:19.666480] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e8 length 0x10 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.666519] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:22:19.727 [2024-04-26 14:59:19.666543] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.727 [2024-04-26 14:59:19.666566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.727 [2024-04-26 14:59:19.666600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.727 [2024-04-26 14:59:19.666623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.666640] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:22:19.728 [2024-04-26 14:59:19.666657] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf310 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.666679] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.666702] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.666721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.666753] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.666769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.666788] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.666802] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf338 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.666826] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.666852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.666883] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.666899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.666917] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:22:19.728 [2024-04-26 14:59:19.666932] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.666950] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf360 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.666972] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.667092] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:22:19.728 [2024-04-26 14:59:19.667106] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.667155] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.667215] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.667232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.667250] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:22:19.728 [2024-04-26 14:59:19.667269] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf388 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667304] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.667362] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.667378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.667395] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:22:19.728 [2024-04-26 14:59:19.667413] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.667433] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3b0 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667465] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:22:19.728 [2024-04-26 14:59:19.667490] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.667522] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600
00:22:19.728 [2024-04-26 14:59:19.667629] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.667652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.667677] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:22:19.728 [2024-04-26 14:59:19.667696] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:22:19.728 [2024-04-26 14:59:19.667712] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:22:19.728 [2024-04-26 14:59:19.667731] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:22:19.728 [2024-04-26 14:59:19.667745] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:22:19.728 [2024-04-26 14:59:19.667761] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.667777] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3d8 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667799] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.667822] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.667875] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.667895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.667917] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0240 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:19.728 [2024-04-26 14:59:19.667959] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.667977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:19.728 [2024-04-26 14:59:19.667993] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:19.728 [2024-04-26 14:59:19.668027] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:19.728 [2024-04-26 14:59:19.668063] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.668080] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf400 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668103] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:19.728 [2024-04-26 14:59:19.668125] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:19.728 [2024-04-26 14:59:19.668202] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.668218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.668241] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:22:19.728 [2024-04-26 14:59:19.668256] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:22:19.728 [2024-04-26 14:59:19.668272] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf428 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668302] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600
00:22:19.728 [2024-04-26 14:59:19.668377] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.668403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.668426] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf450 length 0x10 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668470] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:22:19.728 [2024-04-26 14:59:19.668537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x188600
00:22:19.728 [2024-04-26 14:59:19.668585] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:22:19.728 [2024-04-26 14:59:19.668672] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.728 [2024-04-26 14:59:19.668696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:22:19.728 [2024-04-26 14:59:19.668745] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600
00:22:19.728 [2024-04-26 14:59:19.668774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x188600
00:22:19.728 [2024-04-26 14:59:19.668789] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf478 length 0x10 lkey 0x188600
00:22:19.729 [2024-04-26 14:59:19.668807] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.729 [2024-04-26 14:59:19.668821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:19.729 [2024-04-26 14:59:19.668837] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4a0 length 0x10 lkey 0x188600
00:22:19.729 [2024-04-26 14:59:19.668851] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.729 [2024-04-26 14:59:19.668867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:19.729 [2024-04-26 14:59:19.668894] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600
00:22:19.729 [2024-04-26 14:59:19.668928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x188600
00:22:19.729 [2024-04-26 14:59:19.668944] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4c8 length 0x10 lkey 0x188600
00:22:19.729 [2024-04-26 14:59:19.668987] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:19.729 [2024-04-26 14:59:19.669004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:19.729 [2024-04-26 14:59:19.669031] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4f0 length 0x10 lkey 0x188600
00:22:19.729 =====================================================
00:22:19.729 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:19.729 =====================================================
00:22:19.729 Controller Capabilities/Features
00:22:19.729 ================================
00:22:19.729 Vendor ID: 0000
00:22:19.729 Subsystem Vendor ID: 0000
00:22:19.729 Serial Number: ....................
00:22:19.729 Model Number: ........................................
00:22:19.729 Firmware Version: 24.05 00:22:19.729 Recommended Arb Burst: 0 00:22:19.729 IEEE OUI Identifier: 00 00 00 00:22:19.729 Multi-path I/O 00:22:19.729 May have multiple subsystem ports: No 00:22:19.729 May have multiple controllers: No 00:22:19.729 Associated with SR-IOV VF: No 00:22:19.729 Max Data Transfer Size: 131072 00:22:19.729 Max Number of Namespaces: 0 00:22:19.729 Max Number of I/O Queues: 1024 00:22:19.729 NVMe Specification Version (VS): 1.3 00:22:19.729 NVMe Specification Version (Identify): 1.3 00:22:19.729 Maximum Queue Entries: 128 00:22:19.729 Contiguous Queues Required: Yes 00:22:19.729 Arbitration Mechanisms Supported 00:22:19.729 Weighted Round Robin: Not Supported 00:22:19.729 Vendor Specific: Not Supported 00:22:19.729 Reset Timeout: 15000 ms 00:22:19.729 Doorbell Stride: 4 bytes 00:22:19.729 NVM Subsystem Reset: Not Supported 00:22:19.729 Command Sets Supported 00:22:19.729 NVM Command Set: Supported 00:22:19.729 Boot Partition: Not Supported 00:22:19.729 Memory Page Size Minimum: 4096 bytes 00:22:19.729 Memory Page Size Maximum: 4096 bytes 00:22:19.729 Persistent Memory Region: Not Supported 00:22:19.729 Optional Asynchronous Events Supported 00:22:19.729 Namespace Attribute Notices: Not Supported 00:22:19.729 Firmware Activation Notices: Not Supported 00:22:19.729 ANA Change Notices: Not Supported 00:22:19.729 PLE Aggregate Log Change Notices: Not Supported 00:22:19.729 LBA Status Info Alert Notices: Not Supported 00:22:19.729 EGE Aggregate Log Change Notices: Not Supported 00:22:19.729 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.729 Zone Descriptor Change Notices: Not Supported 00:22:19.729 Discovery Log Change Notices: Supported 00:22:19.729 Controller Attributes 00:22:19.729 128-bit Host Identifier: Not Supported 00:22:19.729 Non-Operational Permissive Mode: Not Supported 00:22:19.729 NVM Sets: Not Supported 00:22:19.729 Read Recovery Levels: Not Supported 00:22:19.729 Endurance Groups: Not Supported 00:22:19.729 
Predictable Latency Mode: Not Supported 00:22:19.729 Traffic Based Keep ALive: Not Supported 00:22:19.729 Namespace Granularity: Not Supported 00:22:19.729 SQ Associations: Not Supported 00:22:19.729 UUID List: Not Supported 00:22:19.729 Multi-Domain Subsystem: Not Supported 00:22:19.729 Fixed Capacity Management: Not Supported 00:22:19.729 Variable Capacity Management: Not Supported 00:22:19.729 Delete Endurance Group: Not Supported 00:22:19.729 Delete NVM Set: Not Supported 00:22:19.729 Extended LBA Formats Supported: Not Supported 00:22:19.729 Flexible Data Placement Supported: Not Supported 00:22:19.729 00:22:19.729 Controller Memory Buffer Support 00:22:19.729 ================================ 00:22:19.729 Supported: No 00:22:19.729 00:22:19.729 Persistent Memory Region Support 00:22:19.729 ================================ 00:22:19.729 Supported: No 00:22:19.729 00:22:19.729 Admin Command Set Attributes 00:22:19.729 ============================ 00:22:19.729 Security Send/Receive: Not Supported 00:22:19.729 Format NVM: Not Supported 00:22:19.729 Firmware Activate/Download: Not Supported 00:22:19.729 Namespace Management: Not Supported 00:22:19.729 Device Self-Test: Not Supported 00:22:19.729 Directives: Not Supported 00:22:19.729 NVMe-MI: Not Supported 00:22:19.729 Virtualization Management: Not Supported 00:22:19.729 Doorbell Buffer Config: Not Supported 00:22:19.729 Get LBA Status Capability: Not Supported 00:22:19.729 Command & Feature Lockdown Capability: Not Supported 00:22:19.729 Abort Command Limit: 1 00:22:19.729 Async Event Request Limit: 4 00:22:19.729 Number of Firmware Slots: N/A 00:22:19.729 Firmware Slot 1 Read-Only: N/A 00:22:19.729 Firmware Activation Without Reset: N/A 00:22:19.729 Multiple Update Detection Support: N/A 00:22:19.729 Firmware Update Granularity: No Information Provided 00:22:19.729 Per-Namespace SMART Log: No 00:22:19.729 Asymmetric Namespace Access Log Page: Not Supported 00:22:19.729 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:19.729 Command Effects Log Page: Not Supported 00:22:19.729 Get Log Page Extended Data: Supported 00:22:19.729 Telemetry Log Pages: Not Supported 00:22:19.729 Persistent Event Log Pages: Not Supported 00:22:19.729 Supported Log Pages Log Page: May Support 00:22:19.729 Commands Supported & Effects Log Page: Not Supported 00:22:19.729 Feature Identifiers & Effects Log Page:May Support 00:22:19.729 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.729 Data Area 4 for Telemetry Log: Not Supported 00:22:19.729 Error Log Page Entries Supported: 128 00:22:19.729 Keep Alive: Not Supported 00:22:19.729 00:22:19.729 NVM Command Set Attributes 00:22:19.729 ========================== 00:22:19.729 Submission Queue Entry Size 00:22:19.729 Max: 1 00:22:19.729 Min: 1 00:22:19.729 Completion Queue Entry Size 00:22:19.729 Max: 1 00:22:19.729 Min: 1 00:22:19.729 Number of Namespaces: 0 00:22:19.729 Compare Command: Not Supported 00:22:19.729 Write Uncorrectable Command: Not Supported 00:22:19.729 Dataset Management Command: Not Supported 00:22:19.729 Write Zeroes Command: Not Supported 00:22:19.729 Set Features Save Field: Not Supported 00:22:19.729 Reservations: Not Supported 00:22:19.729 Timestamp: Not Supported 00:22:19.729 Copy: Not Supported 00:22:19.729 Volatile Write Cache: Not Present 00:22:19.729 Atomic Write Unit (Normal): 1 00:22:19.729 Atomic Write Unit (PFail): 1 00:22:19.729 Atomic Compare & Write Unit: 1 00:22:19.729 Fused Compare & Write: Supported 00:22:19.729 Scatter-Gather List 00:22:19.729 SGL Command Set: Supported 00:22:19.729 SGL Keyed: Supported 00:22:19.729 SGL Bit Bucket Descriptor: Not Supported 00:22:19.729 SGL Metadata Pointer: Not Supported 00:22:19.729 Oversized SGL: Not Supported 00:22:19.729 SGL Metadata Address: Not Supported 00:22:19.729 SGL Offset: Supported 00:22:19.729 Transport SGL Data Block: Not Supported 00:22:19.729 Replay Protected Memory Block: Not Supported 00:22:19.729 00:22:19.729 
Firmware Slot Information 00:22:19.729 ========================= 00:22:19.729 Active slot: 0 00:22:19.729 00:22:19.729 00:22:19.729 Error Log 00:22:19.729 ========= 00:22:19.729 00:22:19.729 Active Namespaces 00:22:19.729 ================= 00:22:19.729 Discovery Log Page 00:22:19.729 ================== 00:22:19.729 Generation Counter: 2 00:22:19.729 Number of Records: 2 00:22:19.729 Record Format: 0 00:22:19.729 00:22:19.729 Discovery Log Entry 0 00:22:19.729 ---------------------- 00:22:19.729 Transport Type: 1 (RDMA) 00:22:19.729 Address Family: 1 (IPv4) 00:22:19.729 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:19.729 Entry Flags: 00:22:19.729 Duplicate Returned Information: 1 00:22:19.729 Explicit Persistent Connection Support for Discovery: 1 00:22:19.729 Transport Requirements: 00:22:19.729 Secure Channel: Not Required 00:22:19.729 Port ID: 0 (0x0000) 00:22:19.729 Controller ID: 65535 (0xffff) 00:22:19.729 Admin Max SQ Size: 128 00:22:19.729 Transport Service Identifier: 4420 00:22:19.730 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:19.730 Transport Address: 192.168.100.8 00:22:19.730 Transport Specific Address Subtype - RDMA 00:22:19.730 RDMA QP Service Type: 1 (Reliable Connected) 00:22:19.730 RDMA Provider Type: 1 (No provider specified) 00:22:19.730 RDMA CM Service: 1 (RDMA_CM) 00:22:19.730 Discovery Log Entry 1 00:22:19.730 ---------------------- 00:22:19.730 Transport Type: 1 (RDMA) 00:22:19.730 Address Family: 1 (IPv4) 00:22:19.730 Subsystem Type: 2 (NVM Subsystem) 00:22:19.730 Entry Flags: 00:22:19.730 Duplicate Returned Information: 0 00:22:19.730 Explicit Persistent Connection Support for Discovery: 0 00:22:19.730 Transport Requirements: 00:22:19.730 Secure Channel: Not Required 00:22:19.730 Port ID: 0 (0x0000) 00:22:19.730 Controller ID: 65535 (0xffff) 00:22:19.730 Admin Max SQ Size: [2024-04-26 14:59:19.669225] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare 
to destruct SSD 00:22:19.730 [2024-04-26 14:59:19.669268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669351] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.669408] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.669426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669467] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.669511] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf518 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669533] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.669552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669567] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:19.730 [2024-04-26 14:59:19.669594] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:19.730 [2024-04-26 14:59:19.669611] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf540 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669638] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.669684] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.669699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669717] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf568 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669739] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.669792] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.669810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669828] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf590 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669855] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.669904] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.669920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.669937] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5b8 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669959] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.669983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.670004] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.670021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.670036] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5e0 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.670064] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.670083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.674136] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.674161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.674182] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf608 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.674207] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.674232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.730 [2024-04-26 14:59:19.674266] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.730 [2024-04-26 14:59:19.674287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0 00:22:19.730 [2024-04-26 14:59:19.674302] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf630 length 0x10 lkey 0x188600 00:22:19.730 [2024-04-26 14:59:19.674323] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:22:19.730 128 00:22:19.730 Transport Service Identifier: 4420 00:22:19.730 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:19.730 Transport Address: 192.168.100.8 00:22:19.730 Transport Specific Address Subtype - RDMA 00:22:19.730 RDMA QP Service Type: 1 (Reliable Connected) 00:22:19.730 RDMA Provider Type: 1 (No provider specified) 00:22:19.730 RDMA CM Service: 1 (RDMA_CM) 00:22:19.730 14:59:19 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:19.992 [2024-04-26 14:59:19.846056] Starting SPDK 
v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:19.992 [2024-04-26 14:59:19.846190] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279191 ] 00:22:19.992 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.992 [2024-04-26 14:59:19.930608] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:19.992 [2024-04-26 14:59:19.930753] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:19.993 [2024-04-26 14:59:19.930821] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:19.993 [2024-04-26 14:59:19.930852] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:19.993 [2024-04-26 14:59:19.930932] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:19.993 [2024-04-26 14:59:19.946733] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:22:19.993 [2024-04-26 14:59:19.963092] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:19.993 [2024-04-26 14:59:19.967146] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:19.993 [2024-04-26 14:59:19.967186] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2c0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967206] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2e8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967222] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf310 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967235] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf338 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967252] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf360 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967281] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf388 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967296] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3b0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967310] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3d8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967327] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf400 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967340] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf428 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967355] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf450 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967371] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf478 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967387] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4a0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967400] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4c8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967415] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4f0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967442] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf518 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967458] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf540 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967471] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf568 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967486] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf590 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967514] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5b8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967531] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5e0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967551] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf608 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967567] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf630 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967580] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf658 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967595] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967608] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967623] 
nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967639] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967656] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967670] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967685] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967699] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:19.993 [2024-04-26 14:59:19.967715] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:19.993 [2024-04-26 14:59:19.967726] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:19.993 [2024-04-26 14:59:19.967779] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.967839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cee00 len:0x400 key:0x188600 00:22:19.993 [2024-04-26 14:59:19.975152] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.975195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.975219] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2c0 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975255] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:19.993 [2024-04-26 14:59:19.975294] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:19.993 [2024-04-26 14:59:19.975316] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:19.993 [2024-04-26 14:59:19.975351] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.975419] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.975453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.975474] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:19.993 [2024-04-26 14:59:19.975506] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e8 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:19.993 [2024-04-26 14:59:19.975549] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.975598] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.975620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 
14:59:19.975637] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:19.993 [2024-04-26 14:59:19.975654] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf310 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975676] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.975701] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.975754] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.975770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.975789] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.975803] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf338 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975828] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.975877] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.975892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.975912] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:19.993 [2024-04-26 14:59:19.975926] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.975945] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf360 length 0x10 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.975967] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.976088] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:19.993 [2024-04-26 14:59:19.976101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.976155] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.976178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.976214] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.976231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.976252] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:19.993 [2024-04-26 14:59:19.976267] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf388 length 0x10 lkey 0x188600 
00:22:19.993 [2024-04-26 14:59:19.976304] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.993 [2024-04-26 14:59:19.976337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.993 [2024-04-26 14:59:19.976367] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.993 [2024-04-26 14:59:19.976384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:19.993 [2024-04-26 14:59:19.976401] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:19.994 [2024-04-26 14:59:19.976437] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.976469] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3b0 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.976503] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:19.994 [2024-04-26 14:59:19.976524] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.976555] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.976580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:22:19.994 [2024-04-26 14:59:19.976666] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:22:19.994 [2024-04-26 14:59:19.976687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.976712] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:19.994 [2024-04-26 14:59:19.976730] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:19.994 [2024-04-26 14:59:19.976743] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:19.994 [2024-04-26 14:59:19.976766] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:19.994 [2024-04-26 14:59:19.976779] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:19.994 [2024-04-26 14:59:19.976795] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.976811] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3d8 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.976835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.976858] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.976884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.994 [2024-04-26 14:59:19.976916] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.976935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.976957] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0240 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.976979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.994 [2024-04-26 14:59:19.976996] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.994 [2024-04-26 14:59:19.977036] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.994 [2024-04-26 14:59:19.977071] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.994 [2024-04-26 14:59:19.977122] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977154] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf400 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977191] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977215] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 
length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.994 [2024-04-26 14:59:19.977286] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.977304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.977326] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:19.994 [2024-04-26 14:59:19.977343] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977359] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf428 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977378] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977435] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.994 [2024-04-26 14:59:19.977498] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.977519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e 
sqhd:000b p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.977602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977623] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf450 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977653] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977686] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x188600 00:22:19.994 [2024-04-26 14:59:19.977765] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.977785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.977832] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:19.994 [2024-04-26 14:59:19.977864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977882] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf478 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.977912] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.977949] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 
[2024-04-26 14:59:19.977970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:22:19.994 [2024-04-26 14:59:19.978038] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.978054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.978096] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978135] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4a0 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978164] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978208] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188600 00:22:19.994 [2024-04-26 14:59:19.978288] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.978311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.978344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978368] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf4c8 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978388] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978479] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:19.994 [2024-04-26 14:59:19.978495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:19.994 [2024-04-26 14:59:19.978508] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:19.994 [2024-04-26 14:59:19.978554] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.994 [2024-04-26 14:59:19.978607] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.994 [2024-04-26 14:59:19.978652] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.994 [2024-04-26 14:59:19.978680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:19.994 [2024-04-26 14:59:19.978704] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4f0 length 0x10 lkey 0x188600 00:22:19.994 [2024-04-26 14:59:19.978721] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.978738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.978752] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf518 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.978777] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.978812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.995 [2024-04-26 14:59:19.978844] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.978860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.978877] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf540 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.978898] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.978921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.995 [2024-04-26 14:59:19.978952] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.978971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.978985] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf568 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.979011] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.979034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.995 [2024-04-26 14:59:19.979067] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.979083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.979100] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf590 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983155] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x188600 00:22:19.995 [2024-04-26 14:59:19.983231] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0100 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x188600 00:22:19.995 [2024-04-26 14:59:19.983290] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x188600 00:22:19.995 [2024-04-26 14:59:19.983351] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x188600 00:22:19.995 [2024-04-26 14:59:19.983408] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.983477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5b8 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983513] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.983528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.983556] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5e0 length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983593] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.983610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.983627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf608 
length 0x10 lkey 0x188600 00:22:19.995 [2024-04-26 14:59:19.983645] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.995 [2024-04-26 14:59:19.983659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:19.995 [2024-04-26 14:59:19.983686] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf630 length 0x10 lkey 0x188600 00:22:19.995 ===================================================== 00:22:19.995 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.995 ===================================================== 00:22:19.995 Controller Capabilities/Features 00:22:19.995 ================================ 00:22:19.995 Vendor ID: 8086 00:22:19.995 Subsystem Vendor ID: 8086 00:22:19.995 Serial Number: SPDK00000000000001 00:22:19.995 Model Number: SPDK bdev Controller 00:22:19.995 Firmware Version: 24.05 00:22:19.995 Recommended Arb Burst: 6 00:22:19.995 IEEE OUI Identifier: e4 d2 5c 00:22:19.995 Multi-path I/O 00:22:19.995 May have multiple subsystem ports: Yes 00:22:19.995 May have multiple controllers: Yes 00:22:19.995 Associated with SR-IOV VF: No 00:22:19.995 Max Data Transfer Size: 131072 00:22:19.995 Max Number of Namespaces: 32 00:22:19.995 Max Number of I/O Queues: 127 00:22:19.995 NVMe Specification Version (VS): 1.3 00:22:19.995 NVMe Specification Version (Identify): 1.3 00:22:19.995 Maximum Queue Entries: 128 00:22:19.995 Contiguous Queues Required: Yes 00:22:19.995 Arbitration Mechanisms Supported 00:22:19.995 Weighted Round Robin: Not Supported 00:22:19.995 Vendor Specific: Not Supported 00:22:19.995 Reset Timeout: 15000 ms 00:22:19.995 Doorbell Stride: 4 bytes 00:22:19.995 NVM Subsystem Reset: Not Supported 00:22:19.995 Command Sets Supported 00:22:19.995 NVM Command Set: Supported 00:22:19.995 Boot Partition: Not Supported 00:22:19.995 Memory Page Size Minimum: 4096 bytes 00:22:19.995 Memory Page 
Size Maximum: 4096 bytes 00:22:19.995 Persistent Memory Region: Not Supported 00:22:19.995 Optional Asynchronous Events Supported 00:22:19.995 Namespace Attribute Notices: Supported 00:22:19.995 Firmware Activation Notices: Not Supported 00:22:19.995 ANA Change Notices: Not Supported 00:22:19.995 PLE Aggregate Log Change Notices: Not Supported 00:22:19.995 LBA Status Info Alert Notices: Not Supported 00:22:19.995 EGE Aggregate Log Change Notices: Not Supported 00:22:19.995 Normal NVM Subsystem Shutdown event: Not Supported 00:22:19.995 Zone Descriptor Change Notices: Not Supported 00:22:19.995 Discovery Log Change Notices: Not Supported 00:22:19.995 Controller Attributes 00:22:19.995 128-bit Host Identifier: Supported 00:22:19.995 Non-Operational Permissive Mode: Not Supported 00:22:19.995 NVM Sets: Not Supported 00:22:19.995 Read Recovery Levels: Not Supported 00:22:19.995 Endurance Groups: Not Supported 00:22:19.995 Predictable Latency Mode: Not Supported 00:22:19.995 Traffic Based Keep Alive: Not Supported 00:22:19.995 Namespace Granularity: Not Supported 00:22:19.995 SQ Associations: Not Supported 00:22:19.995 UUID List: Not Supported 00:22:19.995 Multi-Domain Subsystem: Not Supported 00:22:19.995 Fixed Capacity Management: Not Supported 00:22:19.995 Variable Capacity Management: Not Supported 00:22:19.995 Delete Endurance Group: Not Supported 00:22:19.995 Delete NVM Set: Not Supported 00:22:19.995 Extended LBA Formats Supported: Not Supported 00:22:19.995 Flexible Data Placement Supported: Not Supported 00:22:19.995 00:22:19.995 Controller Memory Buffer Support 00:22:19.995 ================================ 00:22:19.995 Supported: No 00:22:19.995 00:22:19.995 Persistent Memory Region Support 00:22:19.995 ================================ 00:22:19.995 Supported: No 00:22:19.995 00:22:19.995 Admin Command Set Attributes 00:22:19.995 ============================ 00:22:19.995 Security Send/Receive: Not Supported 00:22:19.995 Format NVM: Not Supported 00:22:19.995 
Firmware Activate/Download: Not Supported 00:22:19.995 Namespace Management: Not Supported 00:22:19.995 Device Self-Test: Not Supported 00:22:19.995 Directives: Not Supported 00:22:19.995 NVMe-MI: Not Supported 00:22:19.995 Virtualization Management: Not Supported 00:22:19.995 Doorbell Buffer Config: Not Supported 00:22:19.995 Get LBA Status Capability: Not Supported 00:22:19.995 Command & Feature Lockdown Capability: Not Supported 00:22:19.995 Abort Command Limit: 4 00:22:19.995 Async Event Request Limit: 4 00:22:19.995 Number of Firmware Slots: N/A 00:22:19.995 Firmware Slot 1 Read-Only: N/A 00:22:19.995 Firmware Activation Without Reset: N/A 00:22:19.995 Multiple Update Detection Support: N/A 00:22:19.995 Firmware Update Granularity: No Information Provided 00:22:19.995 Per-Namespace SMART Log: No 00:22:19.995 Asymmetric Namespace Access Log Page: Not Supported 00:22:19.995 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:19.995 Command Effects Log Page: Supported 00:22:19.995 Get Log Page Extended Data: Supported 00:22:19.995 Telemetry Log Pages: Not Supported 00:22:19.995 Persistent Event Log Pages: Not Supported 00:22:19.995 Supported Log Pages Log Page: May Support 00:22:19.995 Commands Supported & Effects Log Page: Not Supported 00:22:19.995 Feature Identifiers & Effects Log Page: May Support 00:22:19.995 NVMe-MI Commands & Effects Log Page: May Support 00:22:19.995 Data Area 4 for Telemetry Log: Not Supported 00:22:19.995 Error Log Page Entries Supported: 128 00:22:19.995 Keep Alive: Supported 00:22:19.996 Keep Alive Granularity: 10000 ms 00:22:19.996 00:22:19.996 NVM Command Set Attributes 00:22:19.996 ========================== 00:22:19.996 Submission Queue Entry Size 00:22:19.996 Max: 64 00:22:19.996 Min: 64 00:22:19.996 Completion Queue Entry Size 00:22:19.996 Max: 16 00:22:19.996 Min: 16 00:22:19.996 Number of Namespaces: 32 00:22:19.996 Compare Command: Supported 00:22:19.996 Write Uncorrectable Command: Not Supported 00:22:19.996 Dataset Management 
Command: Supported 00:22:19.996 Write Zeroes Command: Supported 00:22:19.996 Set Features Save Field: Not Supported 00:22:19.996 Reservations: Supported 00:22:19.996 Timestamp: Not Supported 00:22:19.996 Copy: Supported 00:22:19.996 Volatile Write Cache: Present 00:22:19.996 Atomic Write Unit (Normal): 1 00:22:19.996 Atomic Write Unit (PFail): 1 00:22:19.996 Atomic Compare & Write Unit: 1 00:22:19.996 Fused Compare & Write: Supported 00:22:19.996 Scatter-Gather List 00:22:19.996 SGL Command Set: Supported 00:22:19.996 SGL Keyed: Supported 00:22:19.996 SGL Bit Bucket Descriptor: Not Supported 00:22:19.996 SGL Metadata Pointer: Not Supported 00:22:19.996 Oversized SGL: Not Supported 00:22:19.996 SGL Metadata Address: Not Supported 00:22:19.996 SGL Offset: Supported 00:22:19.996 Transport SGL Data Block: Not Supported 00:22:19.996 Replay Protected Memory Block: Not Supported 00:22:19.996 00:22:19.996 Firmware Slot Information 00:22:19.996 ========================= 00:22:19.996 Active slot: 1 00:22:19.996 Slot 1 Firmware Revision: 24.05 00:22:19.996 00:22:19.996 00:22:19.996 Commands Supported and Effects 00:22:19.996 ============================== 00:22:19.996 Admin Commands 00:22:19.996 -------------- 00:22:19.996 Get Log Page (02h): Supported 00:22:19.996 Identify (06h): Supported 00:22:19.996 Abort (08h): Supported 00:22:19.996 Set Features (09h): Supported 00:22:19.996 Get Features (0Ah): Supported 00:22:19.996 Asynchronous Event Request (0Ch): Supported 00:22:19.996 Keep Alive (18h): Supported 00:22:19.996 I/O Commands 00:22:19.996 ------------ 00:22:19.996 Flush (00h): Supported LBA-Change 00:22:19.996 Write (01h): Supported LBA-Change 00:22:19.996 Read (02h): Supported 00:22:19.996 Compare (05h): Supported 00:22:19.996 Write Zeroes (08h): Supported LBA-Change 00:22:19.996 Dataset Management (09h): Supported LBA-Change 00:22:19.996 Copy (19h): Supported LBA-Change 00:22:19.996 Unknown (79h): Supported LBA-Change 00:22:19.996 Unknown (7Ah): Supported 00:22:19.996 
00:22:19.996 Error Log 00:22:19.996 ========= 00:22:19.996 00:22:19.996 Arbitration 00:22:19.996 =========== 00:22:19.996 Arbitration Burst: 1 00:22:19.996 00:22:19.996 Power Management 00:22:19.996 ================ 00:22:19.996 Number of Power States: 1 00:22:19.996 Current Power State: Power State #0 00:22:19.996 Power State #0: 00:22:19.996 Max Power: 0.00 W 00:22:19.996 Non-Operational State: Operational 00:22:19.996 Entry Latency: Not Reported 00:22:19.996 Exit Latency: Not Reported 00:22:19.996 Relative Read Throughput: 0 00:22:19.996 Relative Read Latency: 0 00:22:19.996 Relative Write Throughput: 0 00:22:19.996 Relative Write Latency: 0 00:22:19.996 Idle Power: Not Reported 00:22:19.996 Active Power: Not Reported 00:22:19.996 Non-Operational Permissive Mode: Not Supported 00:22:19.996 00:22:19.996 Health Information 00:22:19.996 ================== 00:22:19.996 Critical Warnings: 00:22:19.996 Available Spare Space: OK 00:22:19.996 Temperature: OK 00:22:19.996 Device Reliability: OK 00:22:19.996 Read Only: No 00:22:19.996 Volatile Memory Backup: OK 00:22:19.996 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:19.996 Temperature Threshold: [2024-04-26 14:59:19.983874] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.983902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.983933] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.983955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.983973] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf658 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984048] 
nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:19.996 [2024-04-26 14:59:19.984080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5960 sqhd:0000 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5960 sqhd:0000 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5960 sqhd:0000 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:5960 sqhd:0000 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984211] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984264] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984316] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984362] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984396] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: 
*DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984447] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:19.996 [2024-04-26 14:59:19.984461] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:19.996 [2024-04-26 14:59:19.984499] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984521] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984575] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984612] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984638] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984693] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 
dnr:0 00:22:19.996 [2024-04-26 14:59:19.984727] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984750] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984797] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984831] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984856] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.996 [2024-04-26 14:59:19.984875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.996 [2024-04-26 14:59:19.984909] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.996 [2024-04-26 14:59:19.984925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:19.996 [2024-04-26 14:59:19.984944] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.984965] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.984990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 
key:0x0 00:22:19.997 [2024-04-26 14:59:19.985012] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985044] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985073] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985150] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985187] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2c0 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985210] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985272] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985308] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2e8 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985338] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985444] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf310 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985466] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985513] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985545] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf338 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985570] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985622] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985655] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf360 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985681] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985731] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985764] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf388 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985789] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985837] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985869] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3b0 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985892] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.985914] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.985943] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.985962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.985976] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3d8 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986014] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.986060] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.986076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.986093] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf400 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986137] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.986192] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.986211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.986226] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf428 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986253] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.986312] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.986329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.986352] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf450 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986375] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.986440] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.986460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.986475] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf478 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986500] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.997 [2024-04-26 14:59:19.986549] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.997 [2024-04-26 14:59:19.986565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:19.997 [2024-04-26 14:59:19.986582] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4a0 length 0x10 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986604] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.997 [2024-04-26 14:59:19.986626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.986650] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.986669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.986683] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4c8 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986709] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.986757] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.986772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.986791] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4f0 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986814] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.986856] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.986874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.986888] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf518 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986917] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.986936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.986967] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.986983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.987002] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf540 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.987025] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.987047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.987073] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.987091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.987121] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf568 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.991176] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.991200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:19.998 [2024-04-26 14:59:19.991244] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:19.998 [2024-04-26 14:59:19.991261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0012 p:0 m:0 dnr:0 00:22:19.998 [2024-04-26 14:59:19.991282] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf590 length 0x10 lkey 0x188600 00:22:19.998 [2024-04-26 14:59:19.991300] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:20.256 0 Kelvin (-273 Celsius) 00:22:20.256 Available Spare: 0% 00:22:20.256 Available Spare Threshold: 0% 00:22:20.256 Life Percentage Used: 0% 00:22:20.256 Data Units Read: 0 00:22:20.256 Data Units Written: 0 00:22:20.256 Host Read Commands: 0 00:22:20.256 Host Write Commands: 0 00:22:20.256 Controller Busy Time: 0 minutes 00:22:20.256 Power Cycles: 0 00:22:20.256 Power On Hours: 0 hours 00:22:20.256 Unsafe Shutdowns: 0 00:22:20.256 Unrecoverable Media Errors: 0 00:22:20.256 Lifetime Error Log Entries: 0 00:22:20.256 Warning Temperature Time: 0 minutes 00:22:20.256 Critical Temperature Time: 0 minutes 00:22:20.256 00:22:20.256 Number of Queues 00:22:20.256 ================ 00:22:20.256 Number of I/O Submission Queues: 127 00:22:20.256 Number of I/O Completion Queues: 127 00:22:20.256 00:22:20.256 Active Namespaces 00:22:20.256 ================= 00:22:20.256 Namespace 
ID:1 00:22:20.256 Error Recovery Timeout: Unlimited 00:22:20.256 Command Set Identifier: NVM (00h) 00:22:20.256 Deallocate: Supported 00:22:20.256 Deallocated/Unwritten Error: Not Supported 00:22:20.256 Deallocated Read Value: Unknown 00:22:20.256 Deallocate in Write Zeroes: Not Supported 00:22:20.256 Deallocated Guard Field: 0xFFFF 00:22:20.256 Flush: Supported 00:22:20.256 Reservation: Supported 00:22:20.256 Namespace Sharing Capabilities: Multiple Controllers 00:22:20.256 Size (in LBAs): 131072 (0GiB) 00:22:20.256 Capacity (in LBAs): 131072 (0GiB) 00:22:20.256 Utilization (in LBAs): 131072 (0GiB) 00:22:20.256 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:20.256 EUI64: ABCDEF0123456789 00:22:20.256 UUID: 89860622-08b2-4a59-8d8d-19a4d868db61 00:22:20.256 Thin Provisioning: Not Supported 00:22:20.256 Per-NS Atomic Units: Yes 00:22:20.256 Atomic Boundary Size (Normal): 0 00:22:20.256 Atomic Boundary Size (PFail): 0 00:22:20.256 Atomic Boundary Offset: 0 00:22:20.256 Maximum Single Source Range Length: 65535 00:22:20.256 Maximum Copy Length: 65535 00:22:20.256 Maximum Source Range Count: 1 00:22:20.256 NGUID/EUI64 Never Reused: No 00:22:20.256 Namespace Write Protected: No 00:22:20.256 Number of LBA Formats: 1 00:22:20.256 Current LBA Format: LBA Format #00 00:22:20.256 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:20.256 00:22:20.256 14:59:20 -- host/identify.sh@51 -- # sync 00:22:20.256 14:59:20 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.256 14:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.256 14:59:20 -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 14:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.256 14:59:20 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:20.256 14:59:20 -- host/identify.sh@56 -- # nvmftestfini 00:22:20.256 14:59:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:20.256 14:59:20 -- nvmf/common.sh@117 -- # sync 00:22:20.256 
14:59:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:20.256 14:59:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:20.256 14:59:20 -- nvmf/common.sh@120 -- # set +e 00:22:20.256 14:59:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.256 14:59:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:20.256 rmmod nvme_rdma 00:22:20.256 rmmod nvme_fabrics 00:22:20.256 14:59:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.256 14:59:20 -- nvmf/common.sh@124 -- # set -e 00:22:20.256 14:59:20 -- nvmf/common.sh@125 -- # return 0 00:22:20.256 14:59:20 -- nvmf/common.sh@478 -- # '[' -n 278911 ']' 00:22:20.256 14:59:20 -- nvmf/common.sh@479 -- # killprocess 278911 00:22:20.256 14:59:20 -- common/autotest_common.sh@936 -- # '[' -z 278911 ']' 00:22:20.256 14:59:20 -- common/autotest_common.sh@940 -- # kill -0 278911 00:22:20.256 14:59:20 -- common/autotest_common.sh@941 -- # uname 00:22:20.256 14:59:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.256 14:59:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 278911 00:22:20.256 14:59:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:20.256 14:59:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:20.256 14:59:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 278911' 00:22:20.256 killing process with pid 278911 00:22:20.256 14:59:20 -- common/autotest_common.sh@955 -- # kill 278911 00:22:20.256 [2024-04-26 14:59:20.189582] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:20.256 14:59:20 -- common/autotest_common.sh@960 -- # wait 278911 00:22:20.823 [2024-04-26 14:59:20.737258] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:22.200 14:59:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:22.200 
14:59:22 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:22.200 00:22:22.200 real 0m6.190s 00:22:22.200 user 0m13.016s 00:22:22.200 sys 0m2.169s 00:22:22.200 14:59:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:22.200 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:22.200 ************************************ 00:22:22.200 END TEST nvmf_identify 00:22:22.200 ************************************ 00:22:22.200 14:59:22 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:22.200 14:59:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:22.200 14:59:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:22.200 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:22.200 ************************************ 00:22:22.200 START TEST nvmf_perf 00:22:22.200 ************************************ 00:22:22.200 14:59:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:22.458 * Looking for test storage... 
00:22:22.458 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:22.458 14:59:22 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.458 14:59:22 -- nvmf/common.sh@7 -- # uname -s 00:22:22.458 14:59:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.458 14:59:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.458 14:59:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.458 14:59:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.458 14:59:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.458 14:59:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.458 14:59:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.458 14:59:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.458 14:59:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.458 14:59:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.458 14:59:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:22.458 14:59:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:22.458 14:59:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.458 14:59:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.458 14:59:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.458 14:59:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.458 14:59:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:22.458 14:59:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.458 14:59:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.458 14:59:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.458 14:59:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.458 14:59:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.458 14:59:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.458 14:59:22 -- paths/export.sh@5 -- # export PATH 00:22:22.458 14:59:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.458 14:59:22 -- nvmf/common.sh@47 -- # : 0 00:22:22.458 14:59:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.458 14:59:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.458 14:59:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.458 14:59:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.458 14:59:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.458 14:59:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.458 14:59:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.458 14:59:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.458 14:59:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:22.459 14:59:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:22.459 14:59:22 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:22.459 14:59:22 -- host/perf.sh@17 -- # nvmftestinit 00:22:22.459 14:59:22 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:22.459 14:59:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.459 14:59:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:22.459 14:59:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:22.459 14:59:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:22.459 14:59:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.459 14:59:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:22:22.459 14:59:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.459 14:59:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:22.459 14:59:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:22.459 14:59:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.459 14:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.364 14:59:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:24.364 14:59:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.364 14:59:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.364 14:59:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.364 14:59:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.364 14:59:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.364 14:59:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.364 14:59:24 -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.364 14:59:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.364 14:59:24 -- nvmf/common.sh@296 -- # e810=() 00:22:24.364 14:59:24 -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.364 14:59:24 -- nvmf/common.sh@297 -- # x722=() 00:22:24.364 14:59:24 -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.364 14:59:24 -- nvmf/common.sh@298 -- # mlx=() 00:22:24.364 14:59:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.364 14:59:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.364 14:59:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.364 14:59:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.364 14:59:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.364 14:59:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.365 14:59:24 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.365 14:59:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:24.365 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:24.365 14:59:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:24.365 14:59:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:24.365 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:24.365 14:59:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:24.365 14:59:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.365 14:59:24 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.365 14:59:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.365 14:59:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:24.365 Found net devices under 0000:09:00.0: mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.365 14:59:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.365 14:59:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:24.365 Found net devices under 0000:09:00.1: mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.365 14:59:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:24.365 14:59:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:24.365 14:59:24 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:24.365 14:59:24 -- nvmf/common.sh@58 -- # uname 00:22:24.365 14:59:24 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:24.365 14:59:24 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:24.365 14:59:24 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:24.365 14:59:24 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:24.365 14:59:24 -- 
nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:24.365 14:59:24 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:24.365 14:59:24 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:24.365 14:59:24 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:24.365 14:59:24 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:24.365 14:59:24 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:24.365 14:59:24 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:24.365 14:59:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:24.365 14:59:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:24.365 14:59:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:24.365 14:59:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:24.365 14:59:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@105 -- # continue 2 00:22:24.365 14:59:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@105 -- # continue 2 00:22:24.365 14:59:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:24.365 14:59:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.365 14:59:24 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:24.365 14:59:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:24.365 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:24.365 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:22:24.365 altname enp9s0f0np0 00:22:24.365 inet 192.168.100.8/24 scope global mlx_0_0 00:22:24.365 valid_lft forever preferred_lft forever 00:22:24.365 14:59:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:24.365 14:59:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.365 14:59:24 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:24.365 14:59:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:24.365 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:24.365 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:22:24.365 altname enp9s0f1np1 00:22:24.365 inet 192.168.100.9/24 scope global mlx_0_1 00:22:24.365 valid_lft forever preferred_lft forever 00:22:24.365 14:59:24 -- nvmf/common.sh@411 -- # return 0 00:22:24.365 14:59:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:24.365 14:59:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:24.365 14:59:24 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:24.365 14:59:24 -- nvmf/common.sh@86 
-- # get_rdma_if_list 00:22:24.365 14:59:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:24.365 14:59:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:24.365 14:59:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:24.365 14:59:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:24.365 14:59:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:24.365 14:59:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@105 -- # continue 2 00:22:24.365 14:59:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.365 14:59:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:24.365 14:59:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@105 -- # continue 2 00:22:24.365 14:59:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:24.365 14:59:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.365 14:59:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:24.365 14:59:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.365 14:59:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.365 14:59:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:24.365 192.168.100.9' 00:22:24.365 14:59:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:24.365 192.168.100.9' 00:22:24.365 14:59:24 -- nvmf/common.sh@446 -- # head -n 1 00:22:24.365 14:59:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:24.366 14:59:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:24.366 192.168.100.9' 00:22:24.366 14:59:24 -- nvmf/common.sh@447 -- # tail -n +2 00:22:24.366 14:59:24 -- nvmf/common.sh@447 -- # head -n 1 00:22:24.366 14:59:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:24.366 14:59:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:24.366 14:59:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:24.366 14:59:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:24.366 14:59:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:24.366 14:59:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:24.366 14:59:24 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:24.366 14:59:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:24.366 14:59:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:24.366 14:59:24 -- common/autotest_common.sh@10 -- # set +x 00:22:24.366 14:59:24 -- nvmf/common.sh@470 -- # nvmfpid=281125 00:22:24.366 14:59:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.366 14:59:24 -- nvmf/common.sh@471 -- # waitforlisten 281125 00:22:24.366 14:59:24 -- common/autotest_common.sh@817 -- # '[' -z 281125 ']' 00:22:24.366 14:59:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.366 14:59:24 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:22:24.366 14:59:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.366 14:59:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:24.366 14:59:24 -- common/autotest_common.sh@10 -- # set +x 00:22:24.366 [2024-04-26 14:59:24.336669] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:24.366 [2024-04-26 14:59:24.336791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.366 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.624 [2024-04-26 14:59:24.457480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.883 [2024-04-26 14:59:24.709452] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.883 [2024-04-26 14:59:24.709531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.883 [2024-04-26 14:59:24.709562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.883 [2024-04-26 14:59:24.709585] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.883 [2024-04-26 14:59:24.709604] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.883 [2024-04-26 14:59:24.709752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.883 [2024-04-26 14:59:24.709821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.884 [2024-04-26 14:59:24.709903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.884 [2024-04-26 14:59:24.709909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.450 14:59:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:25.450 14:59:25 -- common/autotest_common.sh@850 -- # return 0 00:22:25.450 14:59:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:25.450 14:59:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:25.450 14:59:25 -- common/autotest_common.sh@10 -- # set +x 00:22:25.450 14:59:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.450 14:59:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:25.450 14:59:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:28.738 14:59:28 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:28.738 14:59:28 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:28.738 14:59:28 -- host/perf.sh@30 -- # local_nvme_trid=0000:81:00.0 00:22:28.738 14:59:28 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.996 14:59:29 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:28.996 14:59:29 -- host/perf.sh@33 -- # '[' -n 0000:81:00.0 ']' 00:22:28.996 14:59:29 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:28.996 14:59:29 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:28.996 14:59:29 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma 
--num-shared-buffers 1024 -c 0 00:22:29.254 [2024-04-26 14:59:29.234407] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:29.254 [2024-04-26 14:59:29.259822] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5457f3b940) succeed. 00:22:29.254 [2024-04-26 14:59:29.271100] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5457ef7940) succeed. 00:22:29.512 14:59:29 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.769 14:59:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:29.769 14:59:29 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.026 14:59:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.026 14:59:29 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:30.284 14:59:30 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:30.542 [2024-04-26 14:59:30.436187] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:30.542 14:59:30 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:30.800 14:59:30 -- host/perf.sh@52 -- # '[' -n 0000:81:00.0 ']' 00:22:30.800 14:59:30 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:22:30.800 14:59:30 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:30.800 14:59:30 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:81:00.0' 00:22:32.175 Initializing NVMe Controllers 00:22:32.175 Attached to NVMe Controller at 0000:81:00.0 [8086:0a54] 00:22:32.175 Associating PCIE (0000:81:00.0) NSID 1 with lcore 0 00:22:32.175 Initialization complete. Launching workers. 00:22:32.175 ======================================================== 00:22:32.175 Latency(us) 00:22:32.175 Device Information : IOPS MiB/s Average min max 00:22:32.175 PCIE (0000:81:00.0) NSID 1 from core 0: 75506.13 294.95 423.05 42.19 4412.16 00:22:32.175 ======================================================== 00:22:32.175 Total : 75506.13 294.95 423.05 42.19 4412.16 00:22:32.175 00:22:32.175 14:59:32 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:32.175 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.364 Initializing NVMe Controllers 00:22:36.364 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.364 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.364 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:36.364 Initialization complete. Launching workers. 
00:22:36.364 ======================================================== 00:22:36.364 Latency(us) 00:22:36.364 Device Information : IOPS MiB/s Average min max 00:22:36.364 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4295.66 16.78 230.49 89.38 4158.35 00:22:36.364 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3598.04 14.05 277.52 109.85 4182.72 00:22:36.364 ======================================================== 00:22:36.365 Total : 7893.71 30.83 251.93 89.38 4182.72 00:22:36.365 00:22:36.365 14:59:35 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:36.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.652 Initializing NVMe Controllers 00:22:39.652 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.652 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:39.652 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:39.652 Initialization complete. Launching workers. 
00:22:39.652 ======================================================== 00:22:39.652 Latency(us) 00:22:39.652 Device Information : IOPS MiB/s Average min max 00:22:39.652 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10830.81 42.31 2953.53 842.56 6618.51 00:22:39.652 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.83 15.69 7961.84 5839.16 9106.64 00:22:39.652 ======================================================== 00:22:39.652 Total : 14848.64 58.00 4308.71 842.56 9106.64 00:22:39.652 00:22:39.652 14:59:39 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:22:39.652 14:59:39 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:39.652 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.929 Initializing NVMe Controllers 00:22:44.929 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.929 Controller IO queue size 128, less than required. 00:22:44.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.929 Controller IO queue size 128, less than required. 00:22:44.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.929 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:44.929 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:44.929 Initialization complete. Launching workers. 
00:22:44.929 ======================================================== 00:22:44.929 Latency(us) 00:22:44.929 Device Information : IOPS MiB/s Average min max 00:22:44.929 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2325.50 581.37 57760.54 23287.07 340605.65 00:22:44.929 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2486.00 621.50 50078.88 22849.41 347961.26 00:22:44.929 ======================================================== 00:22:44.929 Total : 4811.50 1202.87 53791.59 22849.41 347961.26 00:22:44.929 00:22:44.929 14:59:44 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:22:44.929 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.929 No valid NVMe controllers or AIO or URING devices found 00:22:44.929 Initializing NVMe Controllers 00:22:44.929 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.929 Controller IO queue size 128, less than required. 00:22:44.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:44.929 Controller IO queue size 128, less than required. 00:22:44.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:44.929 WARNING: Some requested NVMe devices were skipped 00:22:44.929 14:59:44 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:22:44.929 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.202 Initializing NVMe Controllers 00:22:50.202 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.202 Controller IO queue size 128, less than required. 00:22:50.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.202 Controller IO queue size 128, less than required. 00:22:50.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.202 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.202 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:50.202 Initialization complete. Launching workers. 
00:22:50.202 00:22:50.202 ==================== 00:22:50.202 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:50.202 RDMA transport: 00:22:50.202 dev name: mlx5_0 00:22:50.202 polls: 224702 00:22:50.202 idle_polls: 222214 00:22:50.202 completions: 26650 00:22:50.202 queued_requests: 1 00:22:50.202 total_send_wrs: 13325 00:22:50.202 send_doorbell_updates: 2171 00:22:50.202 total_recv_wrs: 13452 00:22:50.202 recv_doorbell_updates: 2175 00:22:50.202 --------------------------------- 00:22:50.202 00:22:50.202 ==================== 00:22:50.202 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:50.202 RDMA transport: 00:22:50.202 dev name: mlx5_0 00:22:50.202 polls: 225058 00:22:50.202 idle_polls: 224793 00:22:50.202 completions: 13742 00:22:50.202 queued_requests: 1 00:22:50.202 total_send_wrs: 6871 00:22:50.202 send_doorbell_updates: 239 00:22:50.202 total_recv_wrs: 6998 00:22:50.202 recv_doorbell_updates: 240 00:22:50.202 --------------------------------- 00:22:50.202 ======================================================== 00:22:50.202 Latency(us) 00:22:50.202 Device Information : IOPS MiB/s Average min max 00:22:50.202 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3330.99 832.75 38698.43 19018.65 188544.27 00:22:50.202 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1717.49 429.37 75326.60 43241.91 337801.20 00:22:50.202 ======================================================== 00:22:50.202 Total : 5048.48 1262.12 51159.34 19018.65 337801.20 00:22:50.202 00:22:50.202 14:59:49 -- host/perf.sh@66 -- # sync 00:22:50.202 14:59:49 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.202 14:59:49 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:50.202 14:59:49 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:50.202 
14:59:49 -- host/perf.sh@114 -- # nvmftestfini 00:22:50.202 14:59:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:50.202 14:59:49 -- nvmf/common.sh@117 -- # sync 00:22:50.202 14:59:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:50.202 14:59:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:50.202 14:59:49 -- nvmf/common.sh@120 -- # set +e 00:22:50.202 14:59:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.202 14:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:50.203 rmmod nvme_rdma 00:22:50.203 rmmod nvme_fabrics 00:22:50.203 14:59:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.203 14:59:49 -- nvmf/common.sh@124 -- # set -e 00:22:50.203 14:59:49 -- nvmf/common.sh@125 -- # return 0 00:22:50.203 14:59:49 -- nvmf/common.sh@478 -- # '[' -n 281125 ']' 00:22:50.203 14:59:49 -- nvmf/common.sh@479 -- # killprocess 281125 00:22:50.203 14:59:49 -- common/autotest_common.sh@936 -- # '[' -z 281125 ']' 00:22:50.203 14:59:49 -- common/autotest_common.sh@940 -- # kill -0 281125 00:22:50.203 14:59:49 -- common/autotest_common.sh@941 -- # uname 00:22:50.203 14:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.203 14:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 281125 00:22:50.203 14:59:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.203 14:59:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:50.203 14:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 281125' 00:22:50.203 killing process with pid 281125 00:22:50.203 14:59:49 -- common/autotest_common.sh@955 -- # kill 281125 00:22:50.203 14:59:49 -- common/autotest_common.sh@960 -- # wait 281125 00:22:50.203 [2024-04-26 14:59:50.188100] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:54.386 14:59:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:54.386 14:59:53 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p 
]] 00:22:54.386 00:22:54.386 real 0m31.367s 00:22:54.386 user 1m56.051s 00:22:54.386 sys 0m3.024s 00:22:54.386 14:59:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:54.386 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:22:54.386 ************************************ 00:22:54.386 END TEST nvmf_perf 00:22:54.386 ************************************ 00:22:54.386 14:59:53 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:54.386 14:59:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:54.386 14:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.386 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:22:54.386 ************************************ 00:22:54.386 START TEST nvmf_fio_host 00:22:54.386 ************************************ 00:22:54.386 14:59:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:54.386 * Looking for test storage... 
00:22:54.386 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:54.386 14:59:53 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:54.386 14:59:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.386 14:59:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.386 14:59:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.386 14:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@5 -- # export PATH 00:22:54.386 14:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.386 14:59:53 -- nvmf/common.sh@7 -- # uname -s 00:22:54.386 14:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.386 14:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.386 14:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.386 14:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.386 14:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.386 14:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.386 14:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.386 14:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.386 14:59:53 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.386 14:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.386 14:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:22:54.386 14:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:22:54.386 14:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.386 14:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.386 14:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.386 14:59:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.386 14:59:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:54.386 14:59:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.386 14:59:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.386 14:59:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.386 14:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- paths/export.sh@5 -- # export PATH 00:22:54.386 14:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.386 14:59:53 -- nvmf/common.sh@47 
-- # : 0 00:22:54.386 14:59:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.386 14:59:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.386 14:59:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.386 14:59:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.386 14:59:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.386 14:59:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.386 14:59:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.386 14:59:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.386 14:59:53 -- host/fio.sh@12 -- # nvmftestinit 00:22:54.386 14:59:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:54.386 14:59:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.386 14:59:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:54.386 14:59:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:54.386 14:59:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:54.386 14:59:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.386 14:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.386 14:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.386 14:59:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:54.387 14:59:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:54.387 14:59:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.387 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:22:55.796 14:59:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:55.796 14:59:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.796 14:59:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.796 14:59:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.796 14:59:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.796 14:59:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.796 14:59:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:22:55.796 14:59:55 -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.796 14:59:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.796 14:59:55 -- nvmf/common.sh@296 -- # e810=() 00:22:55.796 14:59:55 -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.796 14:59:55 -- nvmf/common.sh@297 -- # x722=() 00:22:55.796 14:59:55 -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.796 14:59:55 -- nvmf/common.sh@298 -- # mlx=() 00:22:55.796 14:59:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.796 14:59:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.796 14:59:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.796 14:59:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:55.796 14:59:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:55.796 14:59:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:55.796 14:59:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:22:55.796 14:59:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.796 14:59:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:22:55.796 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:22:55.796 14:59:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:55.796 14:59:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.796 14:59:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:22:55.796 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:22:55.796 14:59:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:55.796 14:59:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.796 14:59:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:55.796 14:59:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.796 14:59:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.796 14:59:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:55.796 14:59:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.796 14:59:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:22:55.796 Found net devices under 0000:09:00.0: mlx_0_0 00:22:55.796 14:59:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.796 14:59:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.797 14:59:55 -- nvmf/common.sh@384 -- # (( 1 == 0 
)) 00:22:55.797 14:59:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.797 14:59:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:22:55.797 Found net devices under 0000:09:00.1: mlx_0_1 00:22:55.797 14:59:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.797 14:59:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:55.797 14:59:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:55.797 14:59:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:55.797 14:59:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:55.797 14:59:55 -- nvmf/common.sh@58 -- # uname 00:22:55.797 14:59:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:55.797 14:59:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:55.797 14:59:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:55.797 14:59:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:55.797 14:59:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:55.797 14:59:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:55.797 14:59:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:55.797 14:59:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:55.797 14:59:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:55.797 14:59:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:55.797 14:59:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:55.797 14:59:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:55.797 14:59:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:55.797 14:59:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:55.797 14:59:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:55.797 14:59:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:22:55.797 14:59:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:55.797 14:59:55 -- nvmf/common.sh@105 -- # continue 2 00:22:55.797 14:59:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.797 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:55.797 14:59:55 -- nvmf/common.sh@105 -- # continue 2 00:22:55.797 14:59:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:55.797 14:59:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:55.797 14:59:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.797 14:59:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:55.797 14:59:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:55.797 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:55.797 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:22:55.797 altname enp9s0f0np0 00:22:55.797 inet 192.168.100.8/24 scope global mlx_0_0 00:22:55.797 valid_lft forever preferred_lft forever 00:22:55.797 14:59:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:55.797 14:59:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:55.797 
14:59:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.797 14:59:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.797 14:59:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:55.797 14:59:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:55.797 14:59:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:55.797 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:55.797 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:22:55.797 altname enp9s0f1np1 00:22:55.797 inet 192.168.100.9/24 scope global mlx_0_1 00:22:55.797 valid_lft forever preferred_lft forever 00:22:55.797 14:59:55 -- nvmf/common.sh@411 -- # return 0 00:22:55.797 14:59:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:55.797 14:59:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:55.797 14:59:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:56.055 14:59:55 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:56.055 14:59:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:56.055 14:59:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:56.055 14:59:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:56.055 14:59:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:56.055 14:59:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:56.055 14:59:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:56.055 14:59:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:56.055 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.055 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:56.055 14:59:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:56.055 14:59:55 -- nvmf/common.sh@105 -- # continue 2 00:22:56.055 14:59:55 -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:22:56.055 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.055 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:56.055 14:59:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.055 14:59:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:56.055 14:59:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:56.055 14:59:55 -- nvmf/common.sh@105 -- # continue 2 00:22:56.055 14:59:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:56.055 14:59:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:56.055 14:59:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:56.055 14:59:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:56.055 14:59:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.055 14:59:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.056 14:59:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:56.056 14:59:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:56.056 14:59:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:56.056 14:59:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:56.056 14:59:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.056 14:59:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.056 14:59:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:56.056 192.168.100.9' 00:22:56.056 14:59:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:56.056 192.168.100.9' 00:22:56.056 14:59:55 -- nvmf/common.sh@446 -- # head -n 1 00:22:56.056 14:59:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:56.056 14:59:55 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:56.056 192.168.100.9' 00:22:56.056 14:59:55 -- nvmf/common.sh@447 -- # tail -n +2 00:22:56.056 14:59:55 -- nvmf/common.sh@447 -- # head -n 1 00:22:56.056 14:59:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:56.056 
14:59:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:56.056 14:59:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:56.056 14:59:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:56.056 14:59:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:56.056 14:59:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:56.056 14:59:55 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:56.056 14:59:55 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:56.056 14:59:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:56.056 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.056 14:59:55 -- host/fio.sh@22 -- # nvmfpid=286274 00:22:56.056 14:59:55 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:56.056 14:59:55 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.056 14:59:55 -- host/fio.sh@26 -- # waitforlisten 286274 00:22:56.056 14:59:55 -- common/autotest_common.sh@817 -- # '[' -z 286274 ']' 00:22:56.056 14:59:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.056 14:59:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:56.056 14:59:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.056 14:59:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:56.056 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:56.056 [2024-04-26 14:59:56.007852] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:56.056 [2024-04-26 14:59:56.007982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.314 [2024-04-26 14:59:56.140396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.314 [2024-04-26 14:59:56.384246] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.314 [2024-04-26 14:59:56.384320] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.314 [2024-04-26 14:59:56.384348] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.314 [2024-04-26 14:59:56.384373] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.314 [2024-04-26 14:59:56.384392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:56.314 [2024-04-26 14:59:56.384512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.314 [2024-04-26 14:59:56.384580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.314 [2024-04-26 14:59:56.384674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.314 [2024-04-26 14:59:56.384680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.882 14:59:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.882 14:59:56 -- common/autotest_common.sh@850 -- # return 0 00:22:56.882 14:59:56 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:56.882 14:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.882 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.142 [2024-04-26 14:59:56.970990] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028240/0x7f09d74d6940) succeed. 00:22:57.142 [2024-04-26 14:59:56.982228] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000283c0/0x7f09d7492940) succeed. 
00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:57.414 14:59:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.414 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 14:59:57 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:57.414 14:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.414 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 Malloc1 00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.414 14:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.414 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:57.414 14:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.414 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:57.414 14:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.414 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 [2024-04-26 14:59:57.422221] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:57.414 14:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.414 14:59:57 -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.414 14:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.414 14:59:57 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:57.414 14:59:57 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:57.414 14:59:57 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:57.414 14:59:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:57.414 14:59:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:57.414 14:59:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:57.414 14:59:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.414 14:59:57 -- common/autotest_common.sh@1327 -- # shift 00:22:57.414 14:59:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:57.414 14:59:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.414 14:59:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.414 14:59:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:57.414 14:59:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:57.414 14:59:57 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:57.414 14:59:57 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:57.414 14:59:57 -- common/autotest_common.sh@1333 -- # break 00:22:57.414 14:59:57 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:57.414 14:59:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:57.689 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:57.689 fio-3.35 00:22:57.689 Starting 1 thread 00:22:57.963 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.515 00:23:00.515 test: (groupid=0, jobs=1): err= 0: pid=286622: Fri Apr 26 15:00:00 2024 00:23:00.515 read: IOPS=9904, BW=38.7MiB/s (40.6MB/s)(77.6MiB/2006msec) 00:23:00.515 slat (nsec): min=2240, max=43512, avg=2736.60, stdev=1316.86 00:23:00.516 clat (usec): min=2487, max=11953, avg=6421.30, stdev=285.93 00:23:00.516 lat (usec): min=2512, max=11956, avg=6424.04, stdev=285.92 00:23:00.516 clat percentiles (usec): 00:23:00.516 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6259], 20.00th=[ 6325], 00:23:00.516 | 30.00th=[ 6325], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6390], 00:23:00.516 | 70.00th=[ 6456], 80.00th=[ 6521], 90.00th=[ 6652], 95.00th=[ 6915], 00:23:00.516 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[10028], 99.95th=[11731], 00:23:00.516 | 99.99th=[11994] 00:23:00.516 bw ( KiB/s): min=38864, max=40392, per=100.00%, avg=39624.00, stdev=731.55, samples=4 00:23:00.516 iops : min= 9716, max=10098, avg=9906.00, stdev=182.89, samples=4 00:23:00.516 write: IOPS=9926, BW=38.8MiB/s (40.7MB/s)(77.8MiB/2006msec); 0 zone resets 00:23:00.516 slat (nsec): min=2319, max=40849, avg=3027.50, stdev=1551.69 00:23:00.516 clat (usec): min=2517, max=12635, avg=6422.77, stdev=311.46 00:23:00.516 lat (usec): min=2528, max=12638, avg=6425.80, stdev=311.47 00:23:00.516 clat percentiles (usec): 00:23:00.516 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6259], 20.00th=[ 6325], 00:23:00.516 | 30.00th=[ 
6325], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6390], 00:23:00.516 | 70.00th=[ 6456], 80.00th=[ 6521], 90.00th=[ 6652], 95.00th=[ 6915], 00:23:00.516 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[10290], 99.95th=[11863], 00:23:00.516 | 99.99th=[12649] 00:23:00.516 bw ( KiB/s): min=39216, max=40184, per=99.94%, avg=39682.00, stdev=400.07, samples=4 00:23:00.516 iops : min= 9804, max=10046, avg=9920.50, stdev=100.02, samples=4 00:23:00.516 lat (msec) : 4=0.02%, 10=99.85%, 20=0.13% 00:23:00.516 cpu : usr=99.00%, sys=0.40%, ctx=15, majf=0, minf=1529 00:23:00.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:00.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:00.516 issued rwts: total=19868,19912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:00.516 00:23:00.516 Run status group 0 (all jobs): 00:23:00.516 READ: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=77.6MiB (81.4MB), run=2006-2006msec 00:23:00.516 WRITE: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=77.8MiB (81.6MB), run=2006-2006msec 00:23:00.516 ----------------------------------------------------- 00:23:00.516 Suppressions used: 00:23:00.516 count bytes template 00:23:00.516 1 63 /usr/src/fio/parse.c 00:23:00.516 1 8 libtcmalloc_minimal.so 00:23:00.516 ----------------------------------------------------- 00:23:00.516 00:23:00.516 15:00:00 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:00.516 15:00:00 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio 
'--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:00.516 15:00:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:00.516 15:00:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.516 15:00:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:00.516 15:00:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.516 15:00:00 -- common/autotest_common.sh@1327 -- # shift 00:23:00.516 15:00:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:00.516 15:00:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.516 15:00:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.516 15:00:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:00.516 15:00:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:00.516 15:00:00 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:00.516 15:00:00 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:00.516 15:00:00 -- common/autotest_common.sh@1333 -- # break 00:23:00.516 15:00:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:00.516 15:00:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:00.775 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:00.775 fio-3.35 00:23:00.775 Starting 1 thread 00:23:01.035 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.555 00:23:03.555 test: (groupid=0, jobs=1): err= 0: pid=287025: Fri Apr 26 15:00:03 
2024 00:23:03.555 read: IOPS=7640, BW=119MiB/s (125MB/s)(238MiB/1996msec) 00:23:03.555 slat (nsec): min=3529, max=79643, avg=4819.73, stdev=2420.71 00:23:03.555 clat (usec): min=470, max=14594, avg=2975.17, stdev=1722.11 00:23:03.555 lat (usec): min=476, max=14598, avg=2979.99, stdev=1722.45 00:23:03.555 clat percentiles (usec): 00:23:03.555 | 1.00th=[ 1037], 5.00th=[ 1467], 10.00th=[ 1729], 20.00th=[ 2024], 00:23:03.556 | 30.00th=[ 2245], 40.00th=[ 2409], 50.00th=[ 2606], 60.00th=[ 2835], 00:23:03.556 | 70.00th=[ 3097], 80.00th=[ 3425], 90.00th=[ 3982], 95.00th=[ 5604], 00:23:03.556 | 99.00th=[11600], 99.50th=[13042], 99.90th=[14222], 99.95th=[14353], 00:23:03.556 | 99.99th=[14615] 00:23:03.556 bw ( KiB/s): min=55936, max=66080, per=49.52%, avg=60536.00, stdev=4968.86, samples=4 00:23:03.556 iops : min= 3496, max= 4130, avg=3783.50, stdev=310.55, samples=4 00:23:03.556 write: IOPS=4114, BW=64.3MiB/s (67.4MB/s)(124MiB/1923msec); 0 zone resets 00:23:03.556 slat (nsec): min=33033, max=98053, avg=37815.86, stdev=6230.15 00:23:03.556 clat (usec): min=7776, max=35261, avg=25167.30, stdev=3220.72 00:23:03.556 lat (usec): min=7810, max=35297, avg=25205.11, stdev=3220.83 00:23:03.556 clat percentiles (usec): 00:23:03.556 | 1.00th=[14353], 5.00th=[20055], 10.00th=[21365], 20.00th=[22938], 00:23:03.556 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25297], 60.00th=[25822], 00:23:03.556 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28705], 95.00th=[30278], 00:23:03.556 | 99.00th=[31851], 99.50th=[32637], 99.90th=[34341], 99.95th=[34866], 00:23:03.556 | 99.99th=[35390] 00:23:03.556 bw ( KiB/s): min=58816, max=66848, per=95.02%, avg=62552.00, stdev=3441.29, samples=4 00:23:03.556 iops : min= 3676, max= 4178, avg=3909.50, stdev=215.08, samples=4 00:23:03.556 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.45% 00:23:03.556 lat (msec) : 2=12.00%, 4=46.80%, 10=5.37%, 20=2.72%, 50=32.60% 00:23:03.556 cpu : usr=97.41%, sys=1.34%, ctx=100, majf=0, minf=5657 00:23:03.556 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:03.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:03.556 issued rwts: total=15251,7912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.556 00:23:03.556 Run status group 0 (all jobs): 00:23:03.556 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=238MiB (250MB), run=1996-1996msec 00:23:03.556 WRITE: bw=64.3MiB/s (67.4MB/s), 64.3MiB/s-64.3MiB/s (67.4MB/s-67.4MB/s), io=124MiB (130MB), run=1923-1923msec 00:23:03.556 ----------------------------------------------------- 00:23:03.556 Suppressions used: 00:23:03.556 count bytes template 00:23:03.556 1 63 /usr/src/fio/parse.c 00:23:03.556 230 22080 /usr/src/fio/iolog.c 00:23:03.556 1 8 libtcmalloc_minimal.so 00:23:03.556 ----------------------------------------------------- 00:23:03.556 00:23:03.556 15:00:03 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.556 15:00:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.556 15:00:03 -- common/autotest_common.sh@10 -- # set +x 00:23:03.556 15:00:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.556 15:00:03 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:23:03.556 15:00:03 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:23:03.556 15:00:03 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:23:03.556 15:00:03 -- host/fio.sh@84 -- # nvmftestfini 00:23:03.556 15:00:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:03.556 15:00:03 -- nvmf/common.sh@117 -- # sync 00:23:03.556 15:00:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:03.556 15:00:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:03.556 15:00:03 -- nvmf/common.sh@120 -- # set +e 00:23:03.556 15:00:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.556 15:00:03 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:03.556 rmmod nvme_rdma 00:23:03.556 rmmod nvme_fabrics 00:23:03.556 15:00:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.556 15:00:03 -- nvmf/common.sh@124 -- # set -e 00:23:03.556 15:00:03 -- nvmf/common.sh@125 -- # return 0 00:23:03.556 15:00:03 -- nvmf/common.sh@478 -- # '[' -n 286274 ']' 00:23:03.556 15:00:03 -- nvmf/common.sh@479 -- # killprocess 286274 00:23:03.556 15:00:03 -- common/autotest_common.sh@936 -- # '[' -z 286274 ']' 00:23:03.556 15:00:03 -- common/autotest_common.sh@940 -- # kill -0 286274 00:23:03.556 15:00:03 -- common/autotest_common.sh@941 -- # uname 00:23:03.556 15:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.556 15:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 286274 00:23:03.556 15:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.556 15:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.556 15:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 286274' 00:23:03.556 killing process with pid 286274 00:23:03.556 15:00:03 -- common/autotest_common.sh@955 -- # kill 286274 00:23:03.556 15:00:03 -- common/autotest_common.sh@960 -- # wait 286274 00:23:04.124 [2024-04-26 15:00:04.038248] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:05.494 15:00:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:05.494 15:00:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:05.494 00:23:05.494 real 0m11.703s 00:23:05.494 user 0m40.506s 00:23:05.494 sys 0m2.740s 00:23:05.494 15:00:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.494 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:05.494 ************************************ 00:23:05.494 END TEST nvmf_fio_host 00:23:05.494 ************************************ 00:23:05.494 15:00:05 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:05.494 15:00:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:05.494 15:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.494 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:05.752 ************************************ 00:23:05.752 START TEST nvmf_failover 00:23:05.752 ************************************ 00:23:05.752 15:00:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:05.752 * Looking for test storage... 00:23:05.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:05.752 15:00:05 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.752 15:00:05 -- nvmf/common.sh@7 -- # uname -s 00:23:05.752 15:00:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.752 15:00:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.752 15:00:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.752 15:00:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.752 15:00:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.752 15:00:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.752 15:00:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.752 15:00:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.752 15:00:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.752 15:00:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.752 15:00:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:05.752 15:00:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:05.752 15:00:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.752 15:00:05 -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.752 15:00:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.752 15:00:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.752 15:00:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:05.752 15:00:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.752 15:00:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.752 15:00:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.752 15:00:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.752 15:00:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.752 15:00:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.752 15:00:05 -- paths/export.sh@5 -- # export PATH 00:23:05.752 15:00:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.752 15:00:05 -- nvmf/common.sh@47 -- # : 0 00:23:05.752 15:00:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.752 15:00:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.752 15:00:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.752 15:00:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.752 15:00:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.752 15:00:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.752 15:00:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.752 15:00:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.752 15:00:05 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.752 15:00:05 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.752 15:00:05 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:05.752 15:00:05 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.752 15:00:05 -- host/failover.sh@18 -- # nvmftestinit 00:23:05.752 15:00:05 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:05.752 15:00:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.752 15:00:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:05.752 15:00:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:05.752 15:00:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:05.752 15:00:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.752 15:00:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.753 15:00:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.753 15:00:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:05.753 15:00:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:05.753 15:00:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.753 15:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:07.654 15:00:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:07.654 15:00:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.654 15:00:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.654 15:00:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.654 15:00:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.654 15:00:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.654 15:00:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.654 15:00:07 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.654 15:00:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.654 15:00:07 -- nvmf/common.sh@296 -- # e810=() 00:23:07.654 15:00:07 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.654 15:00:07 -- nvmf/common.sh@297 -- # x722=() 00:23:07.654 15:00:07 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.654 15:00:07 -- 
nvmf/common.sh@298 -- # mlx=() 00:23:07.654 15:00:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.654 15:00:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.654 15:00:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.654 15:00:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.654 15:00:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:23:07.654 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:23:07.654 15:00:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.654 15:00:07 -- 
nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.654 15:00:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.654 15:00:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:23:07.654 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:23:07.654 15:00:07 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.654 15:00:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.654 15:00:07 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.654 15:00:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.654 15:00:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.654 15:00:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.654 15:00:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:23:07.654 Found net devices under 0000:09:00.0: mlx_0_0 00:23:07.654 15:00:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.654 15:00:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.654 15:00:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.654 15:00:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.654 15:00:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:23:07.654 Found net devices under 0000:09:00.1: mlx_0_1 00:23:07.654 15:00:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.654 15:00:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 
00:23:07.654 15:00:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:07.654 15:00:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:07.654 15:00:07 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:07.654 15:00:07 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:07.654 15:00:07 -- nvmf/common.sh@58 -- # uname 00:23:07.654 15:00:07 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:07.654 15:00:07 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:07.654 15:00:07 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:07.654 15:00:07 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:07.654 15:00:07 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:07.654 15:00:07 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:07.654 15:00:07 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:07.654 15:00:07 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:07.654 15:00:07 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:07.654 15:00:07 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:07.655 15:00:07 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:07.655 15:00:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.655 15:00:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.655 15:00:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.655 15:00:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.655 15:00:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.655 15:00:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@105 -- # continue 2 00:23:07.655 15:00:07 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@105 -- # continue 2 00:23:07.655 15:00:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.655 15:00:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.655 15:00:07 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:07.655 15:00:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:07.655 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.655 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:23:07.655 altname enp9s0f0np0 00:23:07.655 inet 192.168.100.8/24 scope global mlx_0_0 00:23:07.655 valid_lft forever preferred_lft forever 00:23:07.655 15:00:07 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.655 15:00:07 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.655 15:00:07 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:07.655 15:00:07 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 
00:23:07.655 15:00:07 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:07.655 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.655 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:23:07.655 altname enp9s0f1np1 00:23:07.655 inet 192.168.100.9/24 scope global mlx_0_1 00:23:07.655 valid_lft forever preferred_lft forever 00:23:07.655 15:00:07 -- nvmf/common.sh@411 -- # return 0 00:23:07.655 15:00:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:07.655 15:00:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:07.655 15:00:07 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:07.655 15:00:07 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:07.655 15:00:07 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.655 15:00:07 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.655 15:00:07 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.655 15:00:07 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.655 15:00:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.655 15:00:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@105 -- # continue 2 00:23:07.655 15:00:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:07.655 15:00:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.655 15:00:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.655 15:00:07 -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@105 -- # continue 2 00:23:07.655 15:00:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.655 15:00:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.655 15:00:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.655 15:00:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.655 15:00:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.655 15:00:07 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:07.655 192.168.100.9' 00:23:07.655 15:00:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:07.655 192.168.100.9' 00:23:07.655 15:00:07 -- nvmf/common.sh@446 -- # head -n 1 00:23:07.655 15:00:07 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:07.655 15:00:07 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:07.655 192.168.100.9' 00:23:07.655 15:00:07 -- nvmf/common.sh@447 -- # tail -n +2 00:23:07.655 15:00:07 -- nvmf/common.sh@447 -- # head -n 1 00:23:07.655 15:00:07 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:07.655 15:00:07 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:07.655 15:00:07 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:07.655 15:00:07 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:07.655 15:00:07 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:07.655 15:00:07 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:07.655 
15:00:07 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:07.655 15:00:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:07.655 15:00:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:07.655 15:00:07 -- common/autotest_common.sh@10 -- # set +x 00:23:07.655 15:00:07 -- nvmf/common.sh@470 -- # nvmfpid=289383 00:23:07.655 15:00:07 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:07.655 15:00:07 -- nvmf/common.sh@471 -- # waitforlisten 289383 00:23:07.655 15:00:07 -- common/autotest_common.sh@817 -- # '[' -z 289383 ']' 00:23:07.655 15:00:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.655 15:00:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.655 15:00:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.655 15:00:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.655 15:00:07 -- common/autotest_common.sh@10 -- # set +x 00:23:07.914 [2024-04-26 15:00:07.764905] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:07.914 [2024-04-26 15:00:07.765032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.914 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.914 [2024-04-26 15:00:07.890043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:08.172 [2024-04-26 15:00:08.146698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:08.172 [2024-04-26 15:00:08.146766] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.172 [2024-04-26 15:00:08.146791] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.172 [2024-04-26 15:00:08.146815] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.172 [2024-04-26 15:00:08.146834] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.172 [2024-04-26 15:00:08.146979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.172 [2024-04-26 15:00:08.147076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.172 [2024-04-26 15:00:08.147078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.737 15:00:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.737 15:00:08 -- common/autotest_common.sh@850 -- # return 0 00:23:08.737 15:00:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.737 15:00:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:08.737 15:00:08 -- common/autotest_common.sh@10 -- # set +x 00:23:08.737 15:00:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.737 15:00:08 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:08.995 [2024-04-26 15:00:08.958709] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7fe9841bd940) succeed. 00:23:08.995 [2024-04-26 15:00:08.969534] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7fe984179940) succeed. 
00:23:09.253 15:00:09 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:09.512 Malloc0 00:23:09.512 15:00:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.770 15:00:09 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.028 15:00:10 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:10.285 [2024-04-26 15:00:10.288010] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:10.285 15:00:10 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:10.543 [2024-04-26 15:00:10.528690] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:10.543 15:00:10 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:10.800 [2024-04-26 15:00:10.773608] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:23:10.800 15:00:10 -- host/failover.sh@31 -- # bdevperf_pid=290186 00:23:10.800 15:00:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:10.800 15:00:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:10.800 15:00:10 -- host/failover.sh@34 -- # waitforlisten 290186 /var/tmp/bdevperf.sock 00:23:10.800 15:00:10 -- common/autotest_common.sh@817 -- # '[' -z 290186 ']' 00:23:10.800 15:00:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.800 15:00:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.800 15:00:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.800 15:00:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.800 15:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:11.734 15:00:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.734 15:00:11 -- common/autotest_common.sh@850 -- # return 0 00:23:11.734 15:00:11 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.300 NVMe0n1 00:23:12.300 15:00:12 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.576 00:23:12.576 15:00:12 -- host/failover.sh@39 -- # run_test_pid=290379 00:23:12.576 15:00:12 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.576 15:00:12 -- host/failover.sh@41 -- # sleep 1 00:23:13.514 15:00:13 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:13.772 15:00:13 -- host/failover.sh@45 -- # sleep 3 
00:23:17.067 15:00:16 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:17.067 00:23:17.067 15:00:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:17.324 15:00:17 -- host/failover.sh@50 -- # sleep 3 00:23:20.606 15:00:20 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:20.606 [2024-04-26 15:00:20.475982] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:20.606 15:00:20 -- host/failover.sh@55 -- # sleep 1 00:23:21.539 15:00:21 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:21.797 15:00:21 -- host/failover.sh@59 -- # wait 290379 00:23:28.363 0 00:23:28.363 15:00:27 -- host/failover.sh@61 -- # killprocess 290186 00:23:28.363 15:00:27 -- common/autotest_common.sh@936 -- # '[' -z 290186 ']' 00:23:28.363 15:00:27 -- common/autotest_common.sh@940 -- # kill -0 290186 00:23:28.363 15:00:27 -- common/autotest_common.sh@941 -- # uname 00:23:28.363 15:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:28.363 15:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 290186 00:23:28.363 15:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:28.363 15:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:28.363 15:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 290186' 00:23:28.363 killing process with pid 290186 00:23:28.363 15:00:27 -- 
common/autotest_common.sh@955 -- # kill 290186 00:23:28.363 15:00:27 -- common/autotest_common.sh@960 -- # wait 290186 00:23:28.635 15:00:28 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.635 [2024-04-26 15:00:10.868203] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:28.635 [2024-04-26 15:00:10.868370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290186 ] 00:23:28.635 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.635 [2024-04-26 15:00:10.991658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.635 [2024-04-26 15:00:11.217784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.635 Running I/O for 15 seconds... 00:23:28.635 [2024-04-26 15:00:14.653947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x18bd00 00:23:28.635 [2024-04-26 15:00:14.654034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.635 [2024-04-26 15:00:14.654094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x18bd00 00:23:28.635 [2024-04-26 15:00:14.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.635 [2024-04-26 15:00:14.654175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x18bd00 00:23:28.635 [2024-04-26 15:00:14.654202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.635 [2024-04-26 15:00:14.654227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x18bd00
00:23:28.635 [2024-04-26 15:00:14.654254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.635 [2024-04-26 15:00:14.654279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x18bd00
00:23:28.635 [2024-04-26 15:00:14.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.635 [2024-04-26 15:00:14.654332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x18bd00
00:23:28.635 [2024-04-26 15:00:14.654361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.635 [2024-04-26 15:00:14.654386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 len:0x1000 key:0x18bd00
00:23:28.635 [2024-04-26 15:00:14.654425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.635 [2024-04-26 15:00:14.654450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x18bd00
00:23:28.635 [2024-04-26 15:00:14.654475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.654997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.655044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.655068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.655093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.655142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.655171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x18bd00
00:23:28.636 [2024-04-26 15:00:14.655202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.636 [2024-04-26 15:00:14.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x18bd00
00:23:28.637 [2024-04-26 15:00:14.655819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.637 [2024-04-26 15:00:14.655843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.655868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.655893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.655918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.655942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d1000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.655966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.655989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075db000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dd000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075df000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e1000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.638 [2024-04-26 15:00:14.656434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e3000 len:0x1000 key:0x18bd00
00:23:28.638 [2024-04-26 15:00:14.656463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ed000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ef000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f1000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f3000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f5000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f7000 len:0x1000 key:0x18bd00
00:23:28.640 [2024-04-26 15:00:14.656964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.640 [2024-04-26 15:00:14.656988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f9000 len:0x1000 key:0x18bd00
00:23:28.641 [2024-04-26 15:00:14.657015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fb000 len:0x1000 key:0x18bd00
00:23:28.641 [2024-04-26 15:00:14.657065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fd000 len:0x1000 key:0x18bd00
00:23:28.641 [2024-04-26 15:00:14.657113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.641 [2024-04-26 15:00:14.657705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.641 [2024-04-26 15:00:14.657728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.657776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.657822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.657870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.657919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.657966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.657990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.642 [2024-04-26 15:00:14.658408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.642 [2024-04-26 15:00:14.658447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.643 [2024-04-26 15:00:14.658951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.643 [2024-04-26 15:00:14.658975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.648 [2024-04-26 15:00:14.658998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.648 [2024-04-26 15:00:14.659022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.648 [2024-04-26 15:00:14.659046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.648 [2024-04-26 15:00:14.659069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 [2024-04-26 15:00:14.659508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:28.650 [2024-04-26 15:00:14.659532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:28.650 
[2024-04-26 15:00:14.659556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.650 [2024-04-26 15:00:14.659606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.650 [2024-04-26 15:00:14.659653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.650 [2024-04-26 15:00:14.659700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.650 [2024-04-26 15:00:14.659749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.650 [2024-04-26 15:00:14.659797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.650 [2024-04-26 15:00:14.659821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.659844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.659867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.659891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.659915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.659938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.659962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.659986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93648 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 
15:00:14.660415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.660574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.651 [2024-04-26 15:00:14.660598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 15:00:14.662227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:28.651 [2024-04-26 15:00:14.662260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:28.651 [2024-04-26 15:00:14.662282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93728 len:8 PRP1 0x0 PRP2 0x0 00:23:28.651 [2024-04-26 15:00:14.662312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.651 [2024-04-26 
15:00:14.662504] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff180 was disconnected and freed. reset controller. 00:23:28.651 [2024-04-26 15:00:14.662537] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:23:28.651 [2024-04-26 15:00:14.662562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:28.651 [2024-04-26 15:00:14.666492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:28.652 [2024-04-26 15:00:14.702303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:28.652 [2024-04-26 15:00:14.751569] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:28.652 [2024-04-26 15:00:18.235101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.652 [2024-04-26 15:00:18.235232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.235264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.652 [2024-04-26 15:00:18.235291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.235312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.652 [2024-04-26 15:00:18.235335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.235355] nvme_qpair.c: 
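The records above follow a fixed SPDK console format: `nvme_io_qpair_print_command` notices for each aborted I/O, followed by `bdev_nvme_failover_trid` and reset notices marking the failover. A minimal sketch (assuming only the record shapes visible in this log; the regex field names are illustrative, not an SPDK API) of a parser that tallies aborted commands and extracts failover transitions:

```python
import re

# Assumed formats, taken from the log lines above:
#   "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93376 len:8 ..."
#   "bdev_nvme_failover_trid: *NOTICE*: Start failover from <addr> to <addr>"
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)

def summarize(log_text):
    """Count aborted I/O commands per opcode and collect failover transitions."""
    counts = {"READ": 0, "WRITE": 0}
    for m in CMD_RE.finditer(log_text):
        counts[m.group(1)] += 1
    failovers = [(m.group(1), m.group(2)) for m in FAILOVER_RE.finditer(log_text)]
    return counts, failovers

# Two records lifted from the log above, concatenated as console output would be.
sample = (
    "[2024-04-26 15:00:14.658408] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 "
    "[2024-04-26 15:00:14.662537] bdev_nvme.c:1857:bdev_nvme_failover_trid: "
    "*NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421"
)
print(summarize(sample))
# → ({'READ': 0, 'WRITE': 1}, [('192.168.100.8:4420', '192.168.100.8:4421')])
```

Run against the full console output, this gives a quick count of how many queued I/Os the SQ deletion aborted before each controller reset, without scrolling the raw records.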
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.652 [2024-04-26 15:00:18.235379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:28.652 [2024-04-26 15:00:18.237090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:28.652 [2024-04-26 15:00:18.237117] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:23:28.652 [2024-04-26 15:00:18.237172] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:28.652 [2024-04-26 15:00:18.237215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x18bd00 00:23:28.652 [2024-04-26 15:00:18.237256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 
15:00:18.237605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.237911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.652 [2024-04-26 15:00:18.237972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.652 [2024-04-26 15:00:18.238008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238100] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.238908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.238969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.653 [2024-04-26 15:00:18.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.653 [2024-04-26 15:00:18.239707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.654 [2024-04-26 15:00:18.239768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756d000 len:0x1000 key:0x18bd00 00:23:28.654 [2024-04-26 15:00:18.239800] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.654 [2024-04-26 15:00:18.239861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756b000 len:0x1000 key:0x18bd00 00:23:28.654 [2024-04-26 15:00:18.239891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.654 [2024-04-26 15:00:18.239953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007569000 len:0x1000 key:0x18bd00 00:23:28.654 [2024-04-26 15:00:18.239987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.654 [2024-04-26 15:00:18.240050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007567000 len:0x1000 key:0x18bd00 00:23:28.654 [2024-04-26 15:00:18.240081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007565000 len:0x1000 key:0x18bd00 00:23:28.655 [2024-04-26 15:00:18.240214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007563000 len:0x1000 key:0x18bd00 00:23:28.655 [2024-04-26 15:00:18.240311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 
[2024-04-26 15:00:18.240373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x18bd00 00:23:28.655 [2024-04-26 15:00:18.240408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755f000 len:0x1000 key:0x18bd00 00:23:28.655 [2024-04-26 15:00:18.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.240622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.240714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.240810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 
[2024-04-26 15:00:18.240902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.240973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.241005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.241064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.241095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.655 [2024-04-26 15:00:18.241189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.655 [2024-04-26 15:00:18.241222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.656 [2024-04-26 15:00:18.241317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.241914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.241975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44816 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ed000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 
15:00:18.242548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x18bd00 00:23:28.656 [2024-04-26 15:00:18.242641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.656 [2024-04-26 15:00:18.242702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e3000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.242734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.242794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e1000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.242828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.242894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075df000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.242926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.242986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dd000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075db000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d1000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x18bd00 00:23:28.657 [2024-04-26 15:00:18.243725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.657 [2024-04-26 15:00:18.243817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.657 [2024-04-26 15:00:18.243908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.657 [2024-04-26 15:00:18.243969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.657 [2024-04-26 15:00:18.244005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244102] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.244950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.244982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.245042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.245074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 [2024-04-26 15:00:18.245158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.245192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.658 
[2024-04-26 15:00:18.245254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.658 [2024-04-26 15:00:18.245293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.245923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.245953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 
15:00:18.246894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.246926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.246986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.247969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.247998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.659 [2024-04-26 15:00:18.248748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.659 [2024-04-26 15:00:18.248807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.248834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.248892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.248923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.248984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 
[2024-04-26 15:00:18.249577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.249635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:18.249662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.284865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:28.660 [2024-04-26 15:00:18.284901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:28.660 [2024-04-26 15:00:18.284923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45712 len:8 PRP1 0x0 PRP2 0x0 00:23:28.660 [2024-04-26 15:00:18.284944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:18.285236] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000137ff180 was disconnected and freed. reset controller. 00:23:28.660 [2024-04-26 15:00:18.285266] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:28.660 [2024-04-26 15:00:18.285323] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:28.660 [2024-04-26 15:00:18.289167] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:28.660 [2024-04-26 15:00:18.344805] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:28.660 [2024-04-26 15:00:22.731635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.731712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.731791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.731836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.731882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.731925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74136 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.731970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.731993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dd000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075db000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73648 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d1000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x18bd00 00:23:28.660 [2024-04-26 15:00:22.732608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.660 [2024-04-26 15:00:22.732951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.732974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ed000 len:0x1000 key:0x18bd00 
00:23:28.660 [2024-04-26 15:00:22.732994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.660 [2024-04-26 15:00:22.733017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e3000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e1000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075df000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.733705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753f000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007541000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007543000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007545000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007547000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.733941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007549000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.733982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754b000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754d000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.661 [2024-04-26 15:00:22.734485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:73808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 
key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755d000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.661 [2024-04-26 15:00:22.734898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755b000 len:0x1000 key:0x18bd00 00:23:28.661 [2024-04-26 15:00:22.734918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.734940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007559000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.734960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.734983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007557000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007555000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007553000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007551000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754f000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 
[2024-04-26 15:00:22.735281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755f000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007561000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007563000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007565000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 
[2024-04-26 15:00:22.735779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007567000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007569000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756b000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756d000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.735926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.735969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.735992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74448 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736305] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74024 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007595000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x18bd00 00:23:28.662 [2024-04-26 15:00:22.736691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.736970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.662 [2024-04-26 15:00:22.736990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.662 [2024-04-26 15:00:22.737011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.663 
[2024-04-26 15:00:22.737032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x18bd00 00:23:28.663 [2024-04-26 15:00:22.737436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.663 [2024-04-26 15:00:22.737490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.663 [2024-04-26 15:00:22.737531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.663 [2024-04-26 15:00:22.737573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.737595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:28.663 [2024-04-26 15:00:22.737615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.739239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:28.663 [2024-04-26 15:00:22.739269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:28.663 [2024-04-26 15:00:22.739289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74600 len:8 PRP1 0x0 PRP2 0x0 00:23:28.663 [2024-04-26 15:00:22.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.663 [2024-04-26 15:00:22.739498] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000137ff180 was disconnected and freed. reset controller. 00:23:28.663 [2024-04-26 15:00:22.739526] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:23:28.663 [2024-04-26 15:00:22.739547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:28.663 [2024-04-26 15:00:22.743490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:28.663 [2024-04-26 15:00:22.779101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:28.663 [2024-04-26 15:00:22.828278] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:28.663
00:23:28.663 Latency(us)
00:23:28.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:28.663 Verification LBA range: start 0x0 length 0x4000
00:23:28.663 NVMe0n1 : 15.01 8083.56 31.58 231.59 0.00 15360.07 855.61 1056343.23
00:23:28.663 ===================================================================================================================
00:23:28.663 Total : 8083.56 31.58 231.59 0.00 15360.07 855.61 1056343.23
00:23:28.663 Received shutdown signal, test time was about 15.000000 seconds
00:23:28.663
00:23:28.663 Latency(us)
00:23:28.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.663 ===================================================================================================================
00:23:28.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:28.663 15:00:28 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:28.663 15:00:28 -- host/failover.sh@65 -- # count=3
00:23:28.663 15:00:28 -- host/failover.sh@67 -- # (( count != 3 ))
00:23:28.663 15:00:28 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:28.663 15:00:28 -- host/failover.sh@73 -- # bdevperf_pid=292302
00:23:28.663 15:00:28 -- host/failover.sh@75 -- # waitforlisten 292302 /var/tmp/bdevperf.sock
00:23:28.663 15:00:28 -- common/autotest_common.sh@817 -- # '[' -z 292302 ']'
00:23:28.663 15:00:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:28.663 15:00:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:28.663 15:00:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:28.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:28.663 15:00:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:23:28.663 15:00:28 -- common/autotest_common.sh@10 -- # set +x
00:23:29.606 15:00:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:23:29.606 15:00:29 -- common/autotest_common.sh@850 -- # return 0
00:23:29.606 15:00:29 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:23:29.862 [2024-04-26 15:00:29.897573] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:23:29.863 15:00:29 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:23:30.120 [2024-04-26 15:00:30.146502] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:23:30.120 15:00:30 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:30.688 NVMe0n1
00:23:30.688 15:00:30 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:30.946
00:23:30.946 15:00:30 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.203 00:23:31.203 15:00:31 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.203 15:00:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:23:31.460 15:00:31 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.718 15:00:31 -- host/failover.sh@87 -- # sleep 3 00:23:35.005 15:00:34 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.005 15:00:34 -- host/failover.sh@88 -- # grep -q NVMe0 00:23:35.005 15:00:34 -- host/failover.sh@90 -- # run_test_pid=293064 00:23:35.005 15:00:34 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.005 15:00:34 -- host/failover.sh@92 -- # wait 293064 00:23:36.375 0 00:23:36.375 15:00:36 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.375 [2024-04-26 15:00:28.740778] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:36.375 [2024-04-26 15:00:28.740939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292302 ] 00:23:36.375 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.375 [2024-04-26 15:00:28.865338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.375 [2024-04-26 15:00:29.092353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.375 [2024-04-26 15:00:31.594052] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:23:36.375 [2024-04-26 15:00:31.594681] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.375 [2024-04-26 15:00:31.594788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.375 [2024-04-26 15:00:31.638063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:36.375 [2024-04-26 15:00:31.655908] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:36.375 Running I/O for 1 seconds... 
00:23:36.375
00:23:36.375 Latency(us)
00:23:36.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.375 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:36.375 Verification LBA range: start 0x0 length 0x4000
00:23:36.375 NVMe0n1 : 1.01 10313.62 40.29 0.00 0.00 12328.55 242.73 16408.27
00:23:36.375 ===================================================================================================================
00:23:36.375 Total : 10313.62 40.29 0.00 0.00 12328.55 242.73 16408.27
00:23:36.375 15:00:36 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:36.375 15:00:36 -- host/failover.sh@95 -- # grep -q NVMe0
00:23:36.375 15:00:36 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:36.631 15:00:36 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:36.631 15:00:36 -- host/failover.sh@99 -- # grep -q NVMe0
00:23:36.888 15:00:36 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:37.145 15:00:37 -- host/failover.sh@101 -- # sleep 3
00:23:40.427 15:00:40 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:40.427 15:00:40 -- host/failover.sh@103 -- # grep -q NVMe0
00:23:40.427 15:00:40 -- host/failover.sh@108 -- # killprocess 292302
00:23:40.427 15:00:40 -- common/autotest_common.sh@936 -- # '[' -z 292302 ']'
00:23:40.427 15:00:40 -- common/autotest_common.sh@940 -- # kill -0 292302
00:23:40.427 15:00:40 -- common/autotest_common.sh@941 -- # uname 00:23:40.427 15:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.427 15:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 292302 00:23:40.427 15:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:40.427 15:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:40.427 15:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 292302' 00:23:40.427 killing process with pid 292302 00:23:40.427 15:00:40 -- common/autotest_common.sh@955 -- # kill 292302 00:23:40.427 15:00:40 -- common/autotest_common.sh@960 -- # wait 292302 00:23:41.364 15:00:41 -- host/failover.sh@110 -- # sync 00:23:41.364 15:00:41 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.623 15:00:41 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:41.623 15:00:41 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:41.623 15:00:41 -- host/failover.sh@116 -- # nvmftestfini 00:23:41.623 15:00:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:41.623 15:00:41 -- nvmf/common.sh@117 -- # sync 00:23:41.623 15:00:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:41.623 15:00:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:41.623 15:00:41 -- nvmf/common.sh@120 -- # set +e 00:23:41.623 15:00:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.623 15:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:41.623 rmmod nvme_rdma 00:23:41.623 rmmod nvme_fabrics 00:23:41.623 15:00:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.623 15:00:41 -- nvmf/common.sh@124 -- # set -e 00:23:41.623 15:00:41 -- nvmf/common.sh@125 -- # return 0 00:23:41.623 15:00:41 -- nvmf/common.sh@478 -- # '[' -n 289383 ']' 00:23:41.623 15:00:41 -- nvmf/common.sh@479 -- # 
killprocess 289383 00:23:41.623 15:00:41 -- common/autotest_common.sh@936 -- # '[' -z 289383 ']' 00:23:41.623 15:00:41 -- common/autotest_common.sh@940 -- # kill -0 289383 00:23:41.623 15:00:41 -- common/autotest_common.sh@941 -- # uname 00:23:41.623 15:00:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.623 15:00:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 289383 00:23:41.623 15:00:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:41.623 15:00:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:41.623 15:00:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 289383' 00:23:41.623 killing process with pid 289383 00:23:41.623 15:00:41 -- common/autotest_common.sh@955 -- # kill 289383 00:23:41.623 15:00:41 -- common/autotest_common.sh@960 -- # wait 289383 00:23:42.190 [2024-04-26 15:00:42.067460] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:43.570 15:00:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:43.570 15:00:43 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:43.570 00:23:43.570 real 0m37.867s 00:23:43.570 user 2m20.817s 00:23:43.570 sys 0m4.432s 00:23:43.570 15:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.570 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:43.570 ************************************ 00:23:43.570 END TEST nvmf_failover 00:23:43.570 ************************************ 00:23:43.570 15:00:43 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:43.570 15:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.570 15:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.570 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:43.570 ************************************ 00:23:43.570 START TEST nvmf_discovery 00:23:43.570 
************************************ 00:23:43.570 15:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:43.570 * Looking for test storage... 00:23:43.570 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:43.570 15:00:43 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.570 15:00:43 -- nvmf/common.sh@7 -- # uname -s 00:23:43.570 15:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.570 15:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.570 15:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.570 15:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.570 15:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.570 15:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.570 15:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.570 15:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.570 15:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.570 15:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.570 15:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:43.570 15:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:43.570 15:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.570 15:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.570 15:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.570 15:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.570 15:00:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:43.570 15:00:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.570 
15:00:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.570 15:00:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.570 15:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.570 15:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.570 15:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.570 15:00:43 -- paths/export.sh@5 -- # export PATH 00:23:43.570 15:00:43 -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.570 15:00:43 -- nvmf/common.sh@47 -- # : 0 00:23:43.570 15:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.570 15:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.570 15:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.570 15:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.570 15:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.570 15:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.570 15:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.570 15:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.828 15:00:43 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:23:43.828 15:00:43 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:43.828 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:23:43.828 15:00:43 -- host/discovery.sh@13 -- # exit 0 00:23:43.828 00:23:43.828 real 0m0.070s 00:23:43.828 user 0m0.029s 00:23:43.828 sys 0m0.046s 00:23:43.828 15:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.828 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:43.828 ************************************ 00:23:43.828 END TEST nvmf_discovery 00:23:43.828 ************************************ 00:23:43.828 15:00:43 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:23:43.828 15:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.828 15:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.828 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:43.828 ************************************ 00:23:43.828 START TEST nvmf_discovery_remove_ifc 00:23:43.828 ************************************ 00:23:43.828 15:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:23:43.828 * Looking for test storage... 
00:23:43.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:43.828 15:00:43 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.828 15:00:43 -- nvmf/common.sh@7 -- # uname -s 00:23:43.828 15:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.829 15:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.829 15:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.829 15:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.829 15:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.829 15:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.829 15:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.829 15:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.829 15:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.829 15:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.829 15:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:43.829 15:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:43.829 15:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.829 15:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.829 15:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.829 15:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.829 15:00:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:43.829 15:00:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.829 15:00:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.829 15:00:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.829 15:00:43 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.829 15:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.829 15:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.829 15:00:43 -- paths/export.sh@5 -- # export PATH 00:23:43.829 15:00:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.829 15:00:43 -- nvmf/common.sh@47 -- # : 0 00:23:43.829 15:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.829 15:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.829 15:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.829 15:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.829 15:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.829 15:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.829 15:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.829 15:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.829 15:00:43 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:23:43.829 15:00:43 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:43.829 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:23:43.829 15:00:43 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:23:43.829 00:23:43.829 real 0m0.068s 00:23:43.829 user 0m0.032s 00:23:43.829 sys 0m0.041s 00:23:43.829 15:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.829 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:43.829 ************************************ 00:23:43.829 END TEST nvmf_discovery_remove_ifc 00:23:43.829 ************************************ 00:23:43.829 15:00:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:43.829 15:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.829 15:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.829 15:00:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.087 ************************************ 00:23:44.087 START TEST nvmf_identify_kernel_target 00:23:44.087 ************************************ 00:23:44.087 15:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:23:44.087 * Looking for test storage... 
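The run_test invocation above is what produces the starred START/END banners and the `real`/`user`/`sys` timing lines in this log. A simplified sketch of that wrapper pattern (my condensed reading of what common/autotest_common.sh appears to do, not the actual implementation):

```shell
#!/usr/bin/env bash
# run_test-style wrapper sketch: banner, timed execution, banner, exit status.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # bash's time keyword prints real/user/sys to stderr
    local rc=$?               # exit status of the wrapped command, not of time
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo true
```

The wrapper preserves the wrapped command's exit status so a failing suite still fails the pipeline stage.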
00:23:44.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:44.087 15:00:43 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.087 15:00:44 -- nvmf/common.sh@7 -- # uname -s 00:23:44.087 15:00:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.087 15:00:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.087 15:00:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.087 15:00:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.087 15:00:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.087 15:00:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.087 15:00:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.087 15:00:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.087 15:00:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.087 15:00:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.087 15:00:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:44.087 15:00:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:44.087 15:00:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.087 15:00:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.087 15:00:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.087 15:00:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.087 15:00:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:44.087 15:00:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.087 15:00:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.087 15:00:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.087 15:00:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.087 15:00:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.087 15:00:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.087 15:00:44 -- paths/export.sh@5 -- # export PATH 00:23:44.087 15:00:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.087 15:00:44 -- nvmf/common.sh@47 -- # : 0 00:23:44.087 15:00:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.087 15:00:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.087 15:00:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.087 15:00:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.087 15:00:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.087 15:00:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.087 15:00:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.087 15:00:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.087 15:00:44 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:44.087 15:00:44 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:44.087 15:00:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.087 15:00:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:44.087 15:00:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:44.087 15:00:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:44.087 15:00:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.087 15:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.087 15:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.087 15:00:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:44.087 15:00:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:44.087 
15:00:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.087 15:00:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.987 15:00:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:45.987 15:00:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.987 15:00:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.987 15:00:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.987 15:00:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.987 15:00:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.987 15:00:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.987 15:00:45 -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.987 15:00:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.987 15:00:45 -- nvmf/common.sh@296 -- # e810=() 00:23:45.987 15:00:45 -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.987 15:00:45 -- nvmf/common.sh@297 -- # x722=() 00:23:45.987 15:00:45 -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.987 15:00:45 -- nvmf/common.sh@298 -- # mlx=() 00:23:45.987 15:00:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.987 15:00:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.987 15:00:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.987 15:00:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:45.987 15:00:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:45.987 15:00:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:45.987 15:00:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:45.987 15:00:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:45.987 15:00:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.987 15:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.987 15:00:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:23:45.987 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:23:45.987 15:00:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:45.987 15:00:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:45.987 15:00:45 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:45.988 15:00:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.988 15:00:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:23:45.988 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:23:45.988 15:00:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:45.988 15:00:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.988 15:00:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.988 15:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
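The array setup above buckets NVMF-capable NICs by PCI vendor:device ID (0x8086 entries into e810/x722, 0x15b3 entries into mlx) before reporting the two 0x15b3:0x1017 ports. A toy sketch of that classification; the grouping follows the IDs visible in the log, and collapsing all 0x15b3 devices into one bucket is my simplification of the per-ID `mlx+=` lines:

```shell
#!/usr/bin/env bash
# Classify a PCI vendor:device pair into the buckets the log's setup uses.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;   # simplification: any Mellanox ID
        *)                           echo unknown ;;
    esac
}

classify_nic 0x15b3 0x1017    # the ID reported for both ports in the log
```

The real script then replaces pci_devs with only the mlx bucket when the mlx5 driver is in play, which is why just the two ConnectX ports are probed.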
00:23:45.988 15:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:45.988 15:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.988 15:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:23:45.988 Found net devices under 0000:09:00.0: mlx_0_0 00:23:45.988 15:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.988 15:00:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.988 15:00:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.988 15:00:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:45.988 15:00:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.988 15:00:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:23:45.988 Found net devices under 0000:09:00.1: mlx_0_1 00:23:45.988 15:00:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.988 15:00:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:45.988 15:00:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:45.988 15:00:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:45.988 15:00:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:45.988 15:00:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:45.988 15:00:45 -- nvmf/common.sh@58 -- # uname 00:23:45.988 15:00:45 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:45.988 15:00:45 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:45.988 15:00:45 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:45.988 15:00:45 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:45.988 15:00:45 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:45.988 15:00:45 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:45.988 15:00:45 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:45.988 15:00:45 -- nvmf/common.sh@68 -- # modprobe 
rdma_ucm 00:23:45.988 15:00:45 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:45.988 15:00:45 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:45.988 15:00:45 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:45.988 15:00:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:45.988 15:00:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:45.988 15:00:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:45.988 15:00:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:45.988 15:00:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:45.988 15:00:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@105 -- # continue 2 00:23:45.988 15:00:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@105 -- # continue 2 00:23:45.988 15:00:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:45.988 15:00:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:45.988 15:00:46 -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:45.988 15:00:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:45.988 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:45.988 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:23:45.988 altname enp9s0f0np0 00:23:45.988 inet 192.168.100.8/24 scope global mlx_0_0 00:23:45.988 valid_lft forever preferred_lft forever 00:23:45.988 15:00:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:45.988 15:00:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:45.988 15:00:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:45.988 15:00:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:45.988 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:45.988 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:23:45.988 altname enp9s0f1np1 00:23:45.988 inet 192.168.100.9/24 scope global mlx_0_1 00:23:45.988 valid_lft forever preferred_lft forever 00:23:45.988 15:00:46 -- nvmf/common.sh@411 -- # return 0 00:23:45.988 15:00:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:45.988 15:00:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:45.988 15:00:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:45.988 15:00:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:45.988 15:00:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:45.988 15:00:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:45.988 15:00:46 -- nvmf/common.sh@94 -- # 
rxe_cfg rxe-net 00:23:45.988 15:00:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:45.988 15:00:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:45.988 15:00:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@105 -- # continue 2 00:23:45.988 15:00:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:45.988 15:00:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:45.988 15:00:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@105 -- # continue 2 00:23:45.988 15:00:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:45.988 15:00:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:45.988 15:00:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:45.988 15:00:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:45.988 15:00:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:45.988 15:00:46 -- 
nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:45.988 192.168.100.9' 00:23:45.988 15:00:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:45.988 192.168.100.9' 00:23:45.988 15:00:46 -- nvmf/common.sh@446 -- # head -n 1 00:23:45.988 15:00:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:45.988 15:00:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:45.988 192.168.100.9' 00:23:45.988 15:00:46 -- nvmf/common.sh@447 -- # tail -n +2 00:23:45.988 15:00:46 -- nvmf/common.sh@447 -- # head -n 1 00:23:45.988 15:00:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:45.988 15:00:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:45.988 15:00:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:45.988 15:00:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:45.988 15:00:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:45.988 15:00:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:46.247 15:00:46 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:46.247 15:00:46 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:46.247 15:00:46 -- nvmf/common.sh@717 -- # local ip 00:23:46.247 15:00:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:46.247 15:00:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:46.247 15:00:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.247 15:00:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.247 15:00:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:46.247 15:00:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:46.247 15:00:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:46.247 15:00:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:46.247 15:00:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:46.247 15:00:46 -- host/identify_kernel_nvmf.sh@15 -- # 
target_ip=192.168.100.8 00:23:46.247 15:00:46 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:23:46.247 15:00:46 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:23:46.247 15:00:46 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:46.247 15:00:46 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:46.247 15:00:46 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:46.247 15:00:46 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:46.247 15:00:46 -- nvmf/common.sh@628 -- # local block nvme 00:23:46.247 15:00:46 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:46.247 15:00:46 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:46.247 15:00:46 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:46.247 15:00:46 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:47.182 Waiting for block devices as requested 00:23:47.182 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.441 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.441 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.441 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.441 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.701 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.701 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.701 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:47.701 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:47.701 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.960 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.960 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.960 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:48.218 0000:80:04.3 (8086 0e23): vfio-pci 
-> ioatdma 00:23:48.218 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:48.218 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:48.218 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:48.476 15:00:48 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:48.476 15:00:48 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:48.476 15:00:48 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:48.476 15:00:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:48.476 15:00:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:48.476 15:00:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:48.476 15:00:48 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:48.476 15:00:48 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:48.476 15:00:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:48.476 No valid GPT data, bailing 00:23:48.476 15:00:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:48.476 15:00:48 -- scripts/common.sh@391 -- # pt= 00:23:48.476 15:00:48 -- scripts/common.sh@392 -- # return 1 00:23:48.476 15:00:48 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:48.476 15:00:48 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:48.476 15:00:48 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:48.476 15:00:48 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:48.476 15:00:48 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:48.476 15:00:48 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:48.476 15:00:48 -- nvmf/common.sh@656 -- # echo 1 00:23:48.476 15:00:48 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:48.476 15:00:48 -- nvmf/common.sh@658 -- # echo 1 00:23:48.476 15:00:48 -- nvmf/common.sh@660 -- # echo 
192.168.100.8 00:23:48.476 15:00:48 -- nvmf/common.sh@661 -- # echo rdma 00:23:48.476 15:00:48 -- nvmf/common.sh@662 -- # echo 4420 00:23:48.476 15:00:48 -- nvmf/common.sh@663 -- # echo ipv4 00:23:48.476 15:00:48 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:48.476 15:00:48 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 192.168.100.8 -t rdma -s 4420 00:23:48.476 00:23:48.476 Discovery Log Number of Records 2, Generation counter 2 00:23:48.476 =====Discovery Log Entry 0====== 00:23:48.476 trtype: rdma 00:23:48.476 adrfam: ipv4 00:23:48.476 subtype: current discovery subsystem 00:23:48.476 treq: not specified, sq flow control disable supported 00:23:48.476 portid: 1 00:23:48.476 trsvcid: 4420 00:23:48.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:48.476 traddr: 192.168.100.8 00:23:48.476 eflags: none 00:23:48.476 rdma_prtype: not specified 00:23:48.476 rdma_qptype: connected 00:23:48.476 rdma_cms: rdma-cm 00:23:48.476 rdma_pkey: 0x0000 00:23:48.476 =====Discovery Log Entry 1====== 00:23:48.476 trtype: rdma 00:23:48.476 adrfam: ipv4 00:23:48.476 subtype: nvme subsystem 00:23:48.476 treq: not specified, sq flow control disable supported 00:23:48.476 portid: 1 00:23:48.476 trsvcid: 4420 00:23:48.476 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:48.476 traddr: 192.168.100.8 00:23:48.476 eflags: none 00:23:48.476 rdma_prtype: not specified 00:23:48.476 rdma_qptype: connected 00:23:48.476 rdma_cms: rdma-cm 00:23:48.476 rdma_pkey: 0x0000 00:23:48.476 15:00:48 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:23:48.476 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:48.738 EAL: No free 2048 kB hugepages reported on 
node 1 00:23:48.738 ===================================================== 00:23:48.738 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:48.738 ===================================================== 00:23:48.738 Controller Capabilities/Features 00:23:48.738 ================================ 00:23:48.738 Vendor ID: 0000 00:23:48.738 Subsystem Vendor ID: 0000 00:23:48.738 Serial Number: bd2c3ae04bac915ea057 00:23:48.738 Model Number: Linux 00:23:48.738 Firmware Version: 6.7.0-68 00:23:48.738 Recommended Arb Burst: 0 00:23:48.738 IEEE OUI Identifier: 00 00 00 00:23:48.738 Multi-path I/O 00:23:48.738 May have multiple subsystem ports: No 00:23:48.738 May have multiple controllers: No 00:23:48.738 Associated with SR-IOV VF: No 00:23:48.738 Max Data Transfer Size: Unlimited 00:23:48.738 Max Number of Namespaces: 0 00:23:48.738 Max Number of I/O Queues: 1024 00:23:48.738 NVMe Specification Version (VS): 1.3 00:23:48.738 NVMe Specification Version (Identify): 1.3 00:23:48.738 Maximum Queue Entries: 128 00:23:48.738 Contiguous Queues Required: No 00:23:48.738 Arbitration Mechanisms Supported 00:23:48.738 Weighted Round Robin: Not Supported 00:23:48.738 Vendor Specific: Not Supported 00:23:48.738 Reset Timeout: 7500 ms 00:23:48.738 Doorbell Stride: 4 bytes 00:23:48.738 NVM Subsystem Reset: Not Supported 00:23:48.738 Command Sets Supported 00:23:48.738 NVM Command Set: Supported 00:23:48.738 Boot Partition: Not Supported 00:23:48.738 Memory Page Size Minimum: 4096 bytes 00:23:48.738 Memory Page Size Maximum: 4096 bytes 00:23:48.738 Persistent Memory Region: Not Supported 00:23:48.738 Optional Asynchronous Events Supported 00:23:48.738 Namespace Attribute Notices: Not Supported 00:23:48.738 Firmware Activation Notices: Not Supported 00:23:48.738 ANA Change Notices: Not Supported 00:23:48.738 PLE Aggregate Log Change Notices: Not Supported 00:23:48.738 LBA Status Info Alert Notices: Not Supported 00:23:48.738 EGE Aggregate Log Change 
Notices: Not Supported 00:23:48.738 Normal NVM Subsystem Shutdown event: Not Supported 00:23:48.738 Zone Descriptor Change Notices: Not Supported 00:23:48.738 Discovery Log Change Notices: Supported 00:23:48.738 Controller Attributes 00:23:48.738 128-bit Host Identifier: Not Supported 00:23:48.738 Non-Operational Permissive Mode: Not Supported 00:23:48.738 NVM Sets: Not Supported 00:23:48.738 Read Recovery Levels: Not Supported 00:23:48.738 Endurance Groups: Not Supported 00:23:48.738 Predictable Latency Mode: Not Supported 00:23:48.738 Traffic Based Keep ALive: Not Supported 00:23:48.738 Namespace Granularity: Not Supported 00:23:48.738 SQ Associations: Not Supported 00:23:48.738 UUID List: Not Supported 00:23:48.738 Multi-Domain Subsystem: Not Supported 00:23:48.738 Fixed Capacity Management: Not Supported 00:23:48.738 Variable Capacity Management: Not Supported 00:23:48.738 Delete Endurance Group: Not Supported 00:23:48.739 Delete NVM Set: Not Supported 00:23:48.739 Extended LBA Formats Supported: Not Supported 00:23:48.739 Flexible Data Placement Supported: Not Supported 00:23:48.739 00:23:48.739 Controller Memory Buffer Support 00:23:48.739 ================================ 00:23:48.739 Supported: No 00:23:48.739 00:23:48.739 Persistent Memory Region Support 00:23:48.739 ================================ 00:23:48.739 Supported: No 00:23:48.739 00:23:48.739 Admin Command Set Attributes 00:23:48.739 ============================ 00:23:48.739 Security Send/Receive: Not Supported 00:23:48.739 Format NVM: Not Supported 00:23:48.739 Firmware Activate/Download: Not Supported 00:23:48.739 Namespace Management: Not Supported 00:23:48.739 Device Self-Test: Not Supported 00:23:48.739 Directives: Not Supported 00:23:48.739 NVMe-MI: Not Supported 00:23:48.739 Virtualization Management: Not Supported 00:23:48.739 Doorbell Buffer Config: Not Supported 00:23:48.739 Get LBA Status Capability: Not Supported 00:23:48.739 Command & Feature Lockdown Capability: Not Supported 
00:23:48.739 Abort Command Limit: 1 00:23:48.739 Async Event Request Limit: 1 00:23:48.739 Number of Firmware Slots: N/A 00:23:48.739 Firmware Slot 1 Read-Only: N/A 00:23:48.739 Firmware Activation Without Reset: N/A 00:23:48.739 Multiple Update Detection Support: N/A 00:23:48.739 Firmware Update Granularity: No Information Provided 00:23:48.739 Per-Namespace SMART Log: No 00:23:48.739 Asymmetric Namespace Access Log Page: Not Supported 00:23:48.739 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:48.739 Command Effects Log Page: Not Supported 00:23:48.739 Get Log Page Extended Data: Supported 00:23:48.739 Telemetry Log Pages: Not Supported 00:23:48.739 Persistent Event Log Pages: Not Supported 00:23:48.739 Supported Log Pages Log Page: May Support 00:23:48.739 Commands Supported & Effects Log Page: Not Supported 00:23:48.739 Feature Identifiers & Effects Log Page:May Support 00:23:48.739 NVMe-MI Commands & Effects Log Page: May Support 00:23:48.739 Data Area 4 for Telemetry Log: Not Supported 00:23:48.739 Error Log Page Entries Supported: 1 00:23:48.739 Keep Alive: Not Supported 00:23:48.739 00:23:48.739 NVM Command Set Attributes 00:23:48.739 ========================== 00:23:48.739 Submission Queue Entry Size 00:23:48.739 Max: 1 00:23:48.739 Min: 1 00:23:48.739 Completion Queue Entry Size 00:23:48.739 Max: 1 00:23:48.739 Min: 1 00:23:48.739 Number of Namespaces: 0 00:23:48.739 Compare Command: Not Supported 00:23:48.739 Write Uncorrectable Command: Not Supported 00:23:48.739 Dataset Management Command: Not Supported 00:23:48.739 Write Zeroes Command: Not Supported 00:23:48.739 Set Features Save Field: Not Supported 00:23:48.739 Reservations: Not Supported 00:23:48.739 Timestamp: Not Supported 00:23:48.739 Copy: Not Supported 00:23:48.739 Volatile Write Cache: Not Present 00:23:48.739 Atomic Write Unit (Normal): 1 00:23:48.739 Atomic Write Unit (PFail): 1 00:23:48.739 Atomic Compare & Write Unit: 1 00:23:48.739 Fused Compare & Write: Not Supported 
00:23:48.739 Scatter-Gather List 00:23:48.739 SGL Command Set: Supported 00:23:48.740 SGL Keyed: Supported 00:23:48.740 SGL Bit Bucket Descriptor: Not Supported 00:23:48.740 SGL Metadata Pointer: Not Supported 00:23:48.740 Oversized SGL: Not Supported 00:23:48.740 SGL Metadata Address: Not Supported 00:23:48.740 SGL Offset: Supported 00:23:48.740 Transport SGL Data Block: Not Supported 00:23:48.740 Replay Protected Memory Block: Not Supported 00:23:48.740 00:23:48.740 Firmware Slot Information 00:23:48.740 ========================= 00:23:48.740 Active slot: 0 00:23:48.740 00:23:48.740 00:23:48.740 Error Log 00:23:48.740 ========= 00:23:48.740 00:23:48.740 Active Namespaces 00:23:48.740 ================= 00:23:48.740 Discovery Log Page 00:23:48.740 ================== 00:23:48.740 Generation Counter: 2 00:23:48.740 Number of Records: 2 00:23:48.740 Record Format: 0 00:23:48.740 00:23:48.740 Discovery Log Entry 0 00:23:48.740 ---------------------- 00:23:48.740 Transport Type: 1 (RDMA) 00:23:48.740 Address Family: 1 (IPv4) 00:23:48.740 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:48.740 Entry Flags: 00:23:48.740 Duplicate Returned Information: 0 00:23:48.740 Explicit Persistent Connection Support for Discovery: 0 00:23:48.740 Transport Requirements: 00:23:48.740 Secure Channel: Not Specified 00:23:48.740 Port ID: 1 (0x0001) 00:23:48.740 Controller ID: 65535 (0xffff) 00:23:48.740 Admin Max SQ Size: 32 00:23:48.740 Transport Service Identifier: 4420 00:23:48.740 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:48.740 Transport Address: 192.168.100.8 00:23:48.740 Transport Specific Address Subtype - RDMA 00:23:48.740 RDMA QP Service Type: 1 (Reliable Connected) 00:23:48.740 RDMA Provider Type: 1 (No provider specified) 00:23:48.740 RDMA CM Service: 1 (RDMA_CM) 00:23:48.740 Discovery Log Entry 1 00:23:48.740 ---------------------- 00:23:48.740 Transport Type: 1 (RDMA) 00:23:48.740 Address Family: 1 (IPv4) 00:23:48.740 Subsystem Type: 2 
(NVM Subsystem) 00:23:48.740 Entry Flags: 00:23:48.740 Duplicate Returned Information: 0 00:23:48.740 Explicit Persistent Connection Support for Discovery: 0 00:23:48.740 Transport Requirements: 00:23:48.740 Secure Channel: Not Specified 00:23:48.740 Port ID: 1 (0x0001) 00:23:48.740 Controller ID: 65535 (0xffff) 00:23:48.740 Admin Max SQ Size: 32 00:23:48.740 Transport Service Identifier: 4420 00:23:48.740 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:48.740 Transport Address: 192.168.100.8 00:23:48.740 Transport Specific Address Subtype - RDMA 00:23:48.740 RDMA QP Service Type: 1 (Reliable Connected) 00:23:48.740 RDMA Provider Type: 1 (No provider specified) 00:23:48.740 RDMA CM Service: 1 (RDMA_CM) 00:23:48.740 15:00:48 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:48.740 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.002 get_feature(0x01) failed 00:23:49.002 get_feature(0x02) failed 00:23:49.002 get_feature(0x04) failed 00:23:49.002 ===================================================== 00:23:49.002 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.002 ===================================================== 00:23:49.002 Controller Capabilities/Features 00:23:49.002 ================================ 00:23:49.002 Vendor ID: 0000 00:23:49.002 Subsystem Vendor ID: 0000 00:23:49.002 Serial Number: 313dbddf6065e2115395 00:23:49.002 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:49.002 Firmware Version: 6.7.0-68 00:23:49.002 Recommended Arb Burst: 6 00:23:49.002 IEEE OUI Identifier: 00 00 00 00:23:49.002 Multi-path I/O 00:23:49.002 May have multiple subsystem ports: Yes 00:23:49.002 May have multiple controllers: Yes 00:23:49.002 Associated with SR-IOV VF: No 00:23:49.002 Max Data Transfer Size: 1048576 00:23:49.002 Max Number 
of Namespaces: 1024 00:23:49.002 Max Number of I/O Queues: 128 00:23:49.002 NVMe Specification Version (VS): 1.3 00:23:49.002 NVMe Specification Version (Identify): 1.3 00:23:49.002 Maximum Queue Entries: 128 00:23:49.002 Contiguous Queues Required: No 00:23:49.002 Arbitration Mechanisms Supported 00:23:49.002 Weighted Round Robin: Not Supported 00:23:49.002 Vendor Specific: Not Supported 00:23:49.002 Reset Timeout: 7500 ms 00:23:49.002 Doorbell Stride: 4 bytes 00:23:49.002 NVM Subsystem Reset: Not Supported 00:23:49.002 Command Sets Supported 00:23:49.002 NVM Command Set: Supported 00:23:49.002 Boot Partition: Not Supported 00:23:49.002 Memory Page Size Minimum: 4096 bytes 00:23:49.002 Memory Page Size Maximum: 4096 bytes 00:23:49.002 Persistent Memory Region: Not Supported 00:23:49.002 Optional Asynchronous Events Supported 00:23:49.002 Namespace Attribute Notices: Supported 00:23:49.002 Firmware Activation Notices: Not Supported 00:23:49.002 ANA Change Notices: Supported 00:23:49.002 PLE Aggregate Log Change Notices: Not Supported 00:23:49.002 LBA Status Info Alert Notices: Not Supported 00:23:49.002 EGE Aggregate Log Change Notices: Not Supported 00:23:49.002 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.002 Zone Descriptor Change Notices: Not Supported 00:23:49.002 Discovery Log Change Notices: Not Supported 00:23:49.002 Controller Attributes 00:23:49.002 128-bit Host Identifier: Supported 00:23:49.002 Non-Operational Permissive Mode: Not Supported 00:23:49.002 NVM Sets: Not Supported 00:23:49.002 Read Recovery Levels: Not Supported 00:23:49.002 Endurance Groups: Not Supported 00:23:49.002 Predictable Latency Mode: Not Supported 00:23:49.002 Traffic Based Keep ALive: Supported 00:23:49.002 Namespace Granularity: Not Supported 00:23:49.002 SQ Associations: Not Supported 00:23:49.002 UUID List: Not Supported 00:23:49.002 Multi-Domain Subsystem: Not Supported 00:23:49.002 Fixed Capacity Management: Not Supported 00:23:49.002 Variable Capacity 
Management: Not Supported 00:23:49.002 Delete Endurance Group: Not Supported 00:23:49.002 Delete NVM Set: Not Supported 00:23:49.002 Extended LBA Formats Supported: Not Supported 00:23:49.002 Flexible Data Placement Supported: Not Supported 00:23:49.002 00:23:49.002 Controller Memory Buffer Support 00:23:49.002 ================================ 00:23:49.002 Supported: No 00:23:49.002 00:23:49.002 Persistent Memory Region Support 00:23:49.002 ================================ 00:23:49.002 Supported: No 00:23:49.002 00:23:49.002 Admin Command Set Attributes 00:23:49.002 ============================ 00:23:49.002 Security Send/Receive: Not Supported 00:23:49.002 Format NVM: Not Supported 00:23:49.002 Firmware Activate/Download: Not Supported 00:23:49.002 Namespace Management: Not Supported 00:23:49.002 Device Self-Test: Not Supported 00:23:49.002 Directives: Not Supported 00:23:49.002 NVMe-MI: Not Supported 00:23:49.002 Virtualization Management: Not Supported 00:23:49.002 Doorbell Buffer Config: Not Supported 00:23:49.002 Get LBA Status Capability: Not Supported 00:23:49.002 Command & Feature Lockdown Capability: Not Supported 00:23:49.002 Abort Command Limit: 4 00:23:49.002 Async Event Request Limit: 4 00:23:49.002 Number of Firmware Slots: N/A 00:23:49.002 Firmware Slot 1 Read-Only: N/A 00:23:49.002 Firmware Activation Without Reset: N/A 00:23:49.002 Multiple Update Detection Support: N/A 00:23:49.002 Firmware Update Granularity: No Information Provided 00:23:49.002 Per-Namespace SMART Log: Yes 00:23:49.002 Asymmetric Namespace Access Log Page: Supported 00:23:49.002 ANA Transition Time : 10 sec 00:23:49.002 00:23:49.002 Asymmetric Namespace Access Capabilities 00:23:49.002 ANA Optimized State : Supported 00:23:49.002 ANA Non-Optimized State : Supported 00:23:49.002 ANA Inaccessible State : Supported 00:23:49.002 ANA Persistent Loss State : Supported 00:23:49.002 ANA Change State : Supported 00:23:49.003 ANAGRPID is not changed : No 00:23:49.003 Non-Zero ANAGRPID for 
NS Mgmt Cmd : Not Supported 00:23:49.003 00:23:49.003 ANA Group Identifier Maximum : 128 00:23:49.003 Number of ANA Group Identifiers : 128 00:23:49.003 Max Number of Allowed Namespaces : 1024 00:23:49.003 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:49.003 Command Effects Log Page: Supported 00:23:49.003 Get Log Page Extended Data: Supported 00:23:49.003 Telemetry Log Pages: Not Supported 00:23:49.003 Persistent Event Log Pages: Not Supported 00:23:49.003 Supported Log Pages Log Page: May Support 00:23:49.003 Commands Supported & Effects Log Page: Not Supported 00:23:49.003 Feature Identifiers & Effects Log Page:May Support 00:23:49.003 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.003 Data Area 4 for Telemetry Log: Not Supported 00:23:49.003 Error Log Page Entries Supported: 128 00:23:49.003 Keep Alive: Supported 00:23:49.003 Keep Alive Granularity: 1000 ms 00:23:49.003 00:23:49.003 NVM Command Set Attributes 00:23:49.003 ========================== 00:23:49.003 Submission Queue Entry Size 00:23:49.003 Max: 64 00:23:49.003 Min: 64 00:23:49.003 Completion Queue Entry Size 00:23:49.003 Max: 16 00:23:49.003 Min: 16 00:23:49.003 Number of Namespaces: 1024 00:23:49.003 Compare Command: Not Supported 00:23:49.003 Write Uncorrectable Command: Not Supported 00:23:49.003 Dataset Management Command: Supported 00:23:49.003 Write Zeroes Command: Supported 00:23:49.003 Set Features Save Field: Not Supported 00:23:49.003 Reservations: Not Supported 00:23:49.003 Timestamp: Not Supported 00:23:49.003 Copy: Not Supported 00:23:49.003 Volatile Write Cache: Present 00:23:49.003 Atomic Write Unit (Normal): 1 00:23:49.003 Atomic Write Unit (PFail): 1 00:23:49.003 Atomic Compare & Write Unit: 1 00:23:49.003 Fused Compare & Write: Not Supported 00:23:49.003 Scatter-Gather List 00:23:49.003 SGL Command Set: Supported 00:23:49.003 SGL Keyed: Supported 00:23:49.003 SGL Bit Bucket Descriptor: Not Supported 00:23:49.003 SGL Metadata Pointer: Not Supported 00:23:49.003 
Oversized SGL: Not Supported 00:23:49.003 SGL Metadata Address: Not Supported 00:23:49.003 SGL Offset: Supported 00:23:49.003 Transport SGL Data Block: Not Supported 00:23:49.003 Replay Protected Memory Block: Not Supported 00:23:49.003 00:23:49.003 Firmware Slot Information 00:23:49.003 ========================= 00:23:49.003 Active slot: 0 00:23:49.003 00:23:49.003 Asymmetric Namespace Access 00:23:49.003 =========================== 00:23:49.003 Change Count : 0 00:23:49.003 Number of ANA Group Descriptors : 1 00:23:49.003 ANA Group Descriptor : 0 00:23:49.003 ANA Group ID : 1 00:23:49.003 Number of NSID Values : 1 00:23:49.003 Change Count : 0 00:23:49.003 ANA State : 1 00:23:49.003 Namespace Identifier : 1 00:23:49.003 00:23:49.003 Commands Supported and Effects 00:23:49.003 ============================== 00:23:49.003 Admin Commands 00:23:49.003 -------------- 00:23:49.003 Get Log Page (02h): Supported 00:23:49.003 Identify (06h): Supported 00:23:49.003 Abort (08h): Supported 00:23:49.003 Set Features (09h): Supported 00:23:49.003 Get Features (0Ah): Supported 00:23:49.003 Asynchronous Event Request (0Ch): Supported 00:23:49.003 Keep Alive (18h): Supported 00:23:49.003 I/O Commands 00:23:49.003 ------------ 00:23:49.003 Flush (00h): Supported 00:23:49.003 Write (01h): Supported LBA-Change 00:23:49.003 Read (02h): Supported 00:23:49.003 Write Zeroes (08h): Supported LBA-Change 00:23:49.003 Dataset Management (09h): Supported 00:23:49.003 00:23:49.003 Error Log 00:23:49.003 ========= 00:23:49.003 Entry: 0 00:23:49.003 Error Count: 0x3 00:23:49.003 Submission Queue Id: 0x0 00:23:49.003 Command Id: 0x5 00:23:49.003 Phase Bit: 0 00:23:49.003 Status Code: 0x2 00:23:49.003 Status Code Type: 0x0 00:23:49.003 Do Not Retry: 1 00:23:49.003 Error Location: 0x28 00:23:49.003 LBA: 0x0 00:23:49.003 Namespace: 0x0 00:23:49.003 Vendor Log Page: 0x0 00:23:49.003 ----------- 00:23:49.003 Entry: 1 00:23:49.003 Error Count: 0x2 00:23:49.003 Submission Queue Id: 0x0 00:23:49.003 
Command Id: 0x5 00:23:49.003 Phase Bit: 0 00:23:49.003 Status Code: 0x2 00:23:49.003 Status Code Type: 0x0 00:23:49.003 Do Not Retry: 1 00:23:49.003 Error Location: 0x28 00:23:49.003 LBA: 0x0 00:23:49.003 Namespace: 0x0 00:23:49.003 Vendor Log Page: 0x0 00:23:49.003 ----------- 00:23:49.003 Entry: 2 00:23:49.003 Error Count: 0x1 00:23:49.003 Submission Queue Id: 0x0 00:23:49.003 Command Id: 0x0 00:23:49.003 Phase Bit: 0 00:23:49.003 Status Code: 0x2 00:23:49.003 Status Code Type: 0x0 00:23:49.003 Do Not Retry: 1 00:23:49.003 Error Location: 0x28 00:23:49.003 LBA: 0x0 00:23:49.003 Namespace: 0x0 00:23:49.003 Vendor Log Page: 0x0 00:23:49.003 00:23:49.003 Number of Queues 00:23:49.003 ================ 00:23:49.003 Number of I/O Submission Queues: 128 00:23:49.003 Number of I/O Completion Queues: 128 00:23:49.003 00:23:49.003 ZNS Specific Controller Data 00:23:49.003 ============================ 00:23:49.003 Zone Append Size Limit: 0 00:23:49.003 00:23:49.003 00:23:49.003 Active Namespaces 00:23:49.003 ================= 00:23:49.003 get_feature(0x05) failed 00:23:49.003 Namespace ID:1 00:23:49.003 Command Set Identifier: NVM (00h) 00:23:49.003 Deallocate: Supported 00:23:49.003 Deallocated/Unwritten Error: Not Supported 00:23:49.003 Deallocated Read Value: Unknown 00:23:49.003 Deallocate in Write Zeroes: Not Supported 00:23:49.003 Deallocated Guard Field: 0xFFFF 00:23:49.003 Flush: Supported 00:23:49.003 Reservation: Not Supported 00:23:49.003 Namespace Sharing Capabilities: Multiple Controllers 00:23:49.003 Size (in LBAs): 3907029168 (1863GiB) 00:23:49.003 Capacity (in LBAs): 3907029168 (1863GiB) 00:23:49.003 Utilization (in LBAs): 3907029168 (1863GiB) 00:23:49.003 UUID: 88fcafd4-894d-481d-85b4-2f041a8cd733 00:23:49.003 Thin Provisioning: Not Supported 00:23:49.003 Per-NS Atomic Units: Yes 00:23:49.003 Atomic Boundary Size (Normal): 0 00:23:49.003 Atomic Boundary Size (PFail): 0 00:23:49.003 Atomic Boundary Offset: 0 00:23:49.003 NGUID/EUI64 Never Reused: No 
00:23:49.003 ANA group ID: 1 00:23:49.003 Namespace Write Protected: No 00:23:49.003 Number of LBA Formats: 1 00:23:49.003 Current LBA Format: LBA Format #00 00:23:49.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:49.003 00:23:49.003 15:00:48 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:49.003 15:00:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:49.003 15:00:48 -- nvmf/common.sh@117 -- # sync 00:23:49.003 15:00:48 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:49.003 15:00:48 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:49.003 15:00:48 -- nvmf/common.sh@120 -- # set +e 00:23:49.003 15:00:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.003 15:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:49.003 rmmod nvme_rdma 00:23:49.003 rmmod nvme_fabrics 00:23:49.003 15:00:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.003 15:00:48 -- nvmf/common.sh@124 -- # set -e 00:23:49.003 15:00:48 -- nvmf/common.sh@125 -- # return 0 00:23:49.003 15:00:48 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:49.003 15:00:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:49.003 15:00:48 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:49.003 15:00:48 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:49.003 15:00:48 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:49.003 15:00:48 -- nvmf/common.sh@675 -- # echo 0 00:23:49.003 15:00:48 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.003 15:00:48 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:49.003 15:00:48 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:49.003 15:00:48 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.003 15:00:48 -- nvmf/common.sh@682 -- # 
modules=(/sys/module/nvmet/holders/*) 00:23:49.003 15:00:48 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:23:49.003 15:00:48 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:50.378 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:50.378 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:50.378 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:52.281 0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.281 00:23:52.281 real 0m8.070s 00:23:52.281 user 0m2.048s 00:23:52.281 sys 0m3.304s 00:23:52.281 15:00:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.281 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:23:52.281 ************************************ 00:23:52.281 END TEST nvmf_identify_kernel_target 00:23:52.281 ************************************ 00:23:52.281 15:00:52 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:52.281 15:00:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:52.281 15:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.281 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:23:52.281 
************************************ 00:23:52.281 START TEST nvmf_auth 00:23:52.281 ************************************ 00:23:52.281 15:00:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:52.281 * Looking for test storage... 00:23:52.281 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:52.281 15:00:52 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.281 15:00:52 -- nvmf/common.sh@7 -- # uname -s 00:23:52.281 15:00:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.281 15:00:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.281 15:00:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.281 15:00:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.281 15:00:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.281 15:00:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.281 15:00:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.281 15:00:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.281 15:00:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.281 15:00:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.281 15:00:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:23:52.281 15:00:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:23:52.281 15:00:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.281 15:00:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.281 15:00:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.281 15:00:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.281 15:00:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:52.281 15:00:52 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.281 15:00:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.281 15:00:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.281 15:00:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.282 15:00:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.282 15:00:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.282 15:00:52 -- 
paths/export.sh@5 -- # export PATH 00:23:52.282 15:00:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.282 15:00:52 -- nvmf/common.sh@47 -- # : 0 00:23:52.282 15:00:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.282 15:00:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.282 15:00:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.282 15:00:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.282 15:00:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.282 15:00:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.282 15:00:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.282 15:00:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.282 15:00:52 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:52.282 15:00:52 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.282 15:00:52 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.282 15:00:52 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.282 15:00:52 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.282 15:00:52 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.282 15:00:52 -- host/auth.sh@21 -- # keys=() 00:23:52.282 15:00:52 -- host/auth.sh@77 -- # nvmftestinit 00:23:52.282 15:00:52 -- nvmf/common.sh@430 -- # '[' 
-z rdma ']' 00:23:52.282 15:00:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.282 15:00:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:52.282 15:00:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:52.282 15:00:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:52.282 15:00:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.282 15:00:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.282 15:00:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.282 15:00:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:52.282 15:00:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:52.282 15:00:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.282 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:23:54.185 15:00:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:54.185 15:00:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.185 15:00:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.185 15:00:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.185 15:00:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.185 15:00:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.185 15:00:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.185 15:00:54 -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.186 15:00:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.186 15:00:54 -- nvmf/common.sh@296 -- # e810=() 00:23:54.186 15:00:54 -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.186 15:00:54 -- nvmf/common.sh@297 -- # x722=() 00:23:54.186 15:00:54 -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.186 15:00:54 -- nvmf/common.sh@298 -- # mlx=() 00:23:54.186 15:00:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.186 15:00:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:54.186 15:00:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.186 15:00:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:23:54.186 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:23:54.186 15:00:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.186 15:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@341 -- # echo 'Found 
0000:09:00.1 (0x15b3 - 0x1017)' 00:23:54.186 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:23:54.186 15:00:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.186 15:00:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.186 15:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.186 15:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:23:54.186 Found net devices under 0000:09:00.0: mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.186 15:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.186 15:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:23:54.186 Found net devices under 0000:09:00.1: mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.186 15:00:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:54.186 15:00:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:54.186 
15:00:54 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:54.186 15:00:54 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:54.186 15:00:54 -- nvmf/common.sh@58 -- # uname 00:23:54.186 15:00:54 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:54.186 15:00:54 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:54.186 15:00:54 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:54.186 15:00:54 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:54.186 15:00:54 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:54.186 15:00:54 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:54.186 15:00:54 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:54.186 15:00:54 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:54.186 15:00:54 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:54.186 15:00:54 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:54.186 15:00:54 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:54.186 15:00:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.186 15:00:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:54.186 15:00:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:54.186 15:00:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.186 15:00:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@105 -- # continue 2 00:23:54.186 15:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@105 -- # continue 2 00:23:54.186 15:00:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:54.186 15:00:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.186 15:00:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:54.186 15:00:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:54.186 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.186 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:23:54.186 altname enp9s0f0np0 00:23:54.186 inet 192.168.100.8/24 scope global mlx_0_0 00:23:54.186 valid_lft forever preferred_lft forever 00:23:54.186 15:00:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:54.186 15:00:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.186 15:00:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:54.186 15:00:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:54.186 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.186 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:23:54.186 altname enp9s0f1np1 00:23:54.186 inet 192.168.100.9/24 
scope global mlx_0_1 00:23:54.186 valid_lft forever preferred_lft forever 00:23:54.186 15:00:54 -- nvmf/common.sh@411 -- # return 0 00:23:54.186 15:00:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:54.186 15:00:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:54.186 15:00:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:54.186 15:00:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:54.186 15:00:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.186 15:00:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:54.186 15:00:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:54.186 15:00:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.186 15:00:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:54.186 15:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@105 -- # continue 2 00:23:54.186 15:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.186 15:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.186 15:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@105 -- # continue 2 00:23:54.186 15:00:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:54.186 15:00:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:54.186 15:00:54 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.186 15:00:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:54.186 15:00:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.186 15:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.186 15:00:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:54.186 192.168.100.9' 00:23:54.186 15:00:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:54.186 192.168.100.9' 00:23:54.186 15:00:54 -- nvmf/common.sh@446 -- # head -n 1 00:23:54.186 15:00:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:54.186 15:00:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:54.186 192.168.100.9' 00:23:54.186 15:00:54 -- nvmf/common.sh@447 -- # tail -n +2 00:23:54.186 15:00:54 -- nvmf/common.sh@447 -- # head -n 1 00:23:54.186 15:00:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:54.186 15:00:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:54.186 15:00:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:54.186 15:00:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:54.186 15:00:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:54.186 15:00:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:54.186 15:00:54 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:23:54.187 15:00:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:54.187 15:00:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:54.187 15:00:54 -- common/autotest_common.sh@10 -- # set +x 
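The RDMA_IP_LIST handling traced above picks the first and second target IPs with `head -n 1` and `tail -n +2 | head -n 1`. A minimal standalone sketch of that selection, using the two addresses reported in this run:

```shell
# Sketch of the NVMF_FIRST/SECOND_TARGET_IP selection traced above.
# RDMA_IP_LIST is a newline-separated list of RDMA interface IPs, as
# printed by get_available_rdma_ips in this run.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

The `tail -n +2 | head -n 1` pipeline is what lets the same code tolerate lists with one, two, or more addresses: the second variable simply comes out empty when only one IP exists.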
00:23:54.187 15:00:54 -- nvmf/common.sh@470 -- # nvmfpid=299302 00:23:54.187 15:00:54 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:54.187 15:00:54 -- nvmf/common.sh@471 -- # waitforlisten 299302 00:23:54.187 15:00:54 -- common/autotest_common.sh@817 -- # '[' -z 299302 ']' 00:23:54.187 15:00:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.187 15:00:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:54.187 15:00:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.187 15:00:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:54.187 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.563 15:00:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:55.563 15:00:55 -- common/autotest_common.sh@850 -- # return 0 00:23:55.563 15:00:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:55.563 15:00:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:55.563 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.563 15:00:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.563 15:00:55 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:55.563 15:00:55 -- host/auth.sh@81 -- # gen_key null 32 00:23:55.563 15:00:55 -- host/auth.sh@53 -- # local digest len file key 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # local -A digests 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # digest=null 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # len=32 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.563 15:00:55 -- 
host/auth.sh@57 -- # key=07dc52c7d4891cab880fdb463413f22a 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.F0c 00:23:55.563 15:00:55 -- host/auth.sh@59 -- # format_dhchap_key 07dc52c7d4891cab880fdb463413f22a 0 00:23:55.563 15:00:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 07dc52c7d4891cab880fdb463413f22a 0 00:23:55.563 15:00:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # key=07dc52c7d4891cab880fdb463413f22a 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # digest=0 00:23:55.563 15:00:55 -- nvmf/common.sh@694 -- # python - 00:23:55.563 15:00:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.F0c 00:23:55.563 15:00:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.F0c 00:23:55.563 15:00:55 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.F0c 00:23:55.563 15:00:55 -- host/auth.sh@82 -- # gen_key null 48 00:23:55.563 15:00:55 -- host/auth.sh@53 -- # local digest len file key 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # local -A digests 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # digest=null 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # len=48 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # key=7df928755734ddf8f5a6c5b1bffa8d65fa1d7daae794b8cf 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.9QP 00:23:55.563 15:00:55 -- host/auth.sh@59 -- # format_dhchap_key 7df928755734ddf8f5a6c5b1bffa8d65fa1d7daae794b8cf 0 00:23:55.563 15:00:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 7df928755734ddf8f5a6c5b1bffa8d65fa1d7daae794b8cf 0 00:23:55.563 15:00:55 -- nvmf/common.sh@691 
-- # local prefix key digest 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # key=7df928755734ddf8f5a6c5b1bffa8d65fa1d7daae794b8cf 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # digest=0 00:23:55.563 15:00:55 -- nvmf/common.sh@694 -- # python - 00:23:55.563 15:00:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.9QP 00:23:55.563 15:00:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.9QP 00:23:55.563 15:00:55 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.9QP 00:23:55.563 15:00:55 -- host/auth.sh@83 -- # gen_key sha256 32 00:23:55.563 15:00:55 -- host/auth.sh@53 -- # local digest len file key 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # local -A digests 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # digest=sha256 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # len=32 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # key=af96989c8e1b768287b05cabcf4473d3 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.563 15:00:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.tvl 00:23:55.563 15:00:55 -- host/auth.sh@59 -- # format_dhchap_key af96989c8e1b768287b05cabcf4473d3 1 00:23:55.563 15:00:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 af96989c8e1b768287b05cabcf4473d3 1 00:23:55.563 15:00:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # key=af96989c8e1b768287b05cabcf4473d3 00:23:55.563 15:00:55 -- nvmf/common.sh@693 -- # digest=1 00:23:55.563 15:00:55 -- nvmf/common.sh@694 -- # python - 00:23:55.563 15:00:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.tvl 00:23:55.563 15:00:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.tvl 00:23:55.563 15:00:55 -- 
host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.tvl 00:23:55.563 15:00:55 -- host/auth.sh@84 -- # gen_key sha384 48 00:23:55.563 15:00:55 -- host/auth.sh@53 -- # local digest len file key 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.563 15:00:55 -- host/auth.sh@54 -- # local -A digests 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # digest=sha384 00:23:55.563 15:00:55 -- host/auth.sh@56 -- # len=48 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.563 15:00:55 -- host/auth.sh@57 -- # key=082501c08f5b60b7784b3dd33b7e9fff21b58f76f5d9bfaa 00:23:55.564 15:00:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.564 15:00:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.93z 00:23:55.564 15:00:55 -- host/auth.sh@59 -- # format_dhchap_key 082501c08f5b60b7784b3dd33b7e9fff21b58f76f5d9bfaa 2 00:23:55.564 15:00:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 082501c08f5b60b7784b3dd33b7e9fff21b58f76f5d9bfaa 2 00:23:55.564 15:00:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # key=082501c08f5b60b7784b3dd33b7e9fff21b58f76f5d9bfaa 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # digest=2 00:23:55.564 15:00:55 -- nvmf/common.sh@694 -- # python - 00:23:55.564 15:00:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.93z 00:23:55.564 15:00:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.93z 00:23:55.564 15:00:55 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.93z 00:23:55.564 15:00:55 -- host/auth.sh@85 -- # gen_key sha512 64 00:23:55.564 15:00:55 -- host/auth.sh@53 -- # local digest len file key 00:23:55.564 15:00:55 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.564 15:00:55 -- host/auth.sh@54 -- # local -A digests 00:23:55.564 15:00:55 -- host/auth.sh@56 -- # 
digest=sha512 00:23:55.564 15:00:55 -- host/auth.sh@56 -- # len=64 00:23:55.564 15:00:55 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.564 15:00:55 -- host/auth.sh@57 -- # key=3ffb89b729a27985aae07e198830fe0b2a116d923e8901e00f95bea08c3966c0 00:23:55.564 15:00:55 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.564 15:00:55 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.IqH 00:23:55.564 15:00:55 -- host/auth.sh@59 -- # format_dhchap_key 3ffb89b729a27985aae07e198830fe0b2a116d923e8901e00f95bea08c3966c0 3 00:23:55.564 15:00:55 -- nvmf/common.sh@708 -- # format_key DHHC-1 3ffb89b729a27985aae07e198830fe0b2a116d923e8901e00f95bea08c3966c0 3 00:23:55.564 15:00:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # key=3ffb89b729a27985aae07e198830fe0b2a116d923e8901e00f95bea08c3966c0 00:23:55.564 15:00:55 -- nvmf/common.sh@693 -- # digest=3 00:23:55.564 15:00:55 -- nvmf/common.sh@694 -- # python - 00:23:55.564 15:00:55 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.IqH 00:23:55.564 15:00:55 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.IqH 00:23:55.564 15:00:55 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.IqH 00:23:55.564 15:00:55 -- host/auth.sh@87 -- # waitforlisten 299302 00:23:55.564 15:00:55 -- common/autotest_common.sh@817 -- # '[' -z 299302 ']' 00:23:55.564 15:00:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.564 15:00:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:55.564 15:00:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
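The gen_key calls traced above all follow one pattern: read len/2 random bytes from /dev/urandom as a plain hex string via `xxd`, store it in a `mktemp` file named after the digest, and restrict it to mode 0600 before handing the path back through `keys[i]`. A hedged sketch of that pattern (variable names follow the trace; the real helper lives in host/auth.sh):

```shell
# Sketch of the gen_key pattern from the trace: an N-hex-character random
# secret, written 0600 to a per-digest temp file.
digest=sha256
len=32
# -p = plain hexdump, -c0 = no column wrapping, -l = bytes to read;
# len/2 random bytes yield len hex characters.
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t "spdk.key-${digest}.XXX")
echo "$key" > "$file"
chmod 0600 "$file"
echo "generated ${#key}-char key in $file"
```

The trace then passes the hex string through format_dhchap_key, which wraps it into the `DHHC-1:<digest>:<base64>:` form that the later `nvmet_auth_set_key` and `rpc_cmd` calls consume.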
00:23:55.564 15:00:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:55.564 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:55.821 15:00:55 -- common/autotest_common.sh@850 -- # return 0 00:23:55.821 15:00:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:55.821 15:00:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.F0c 00:23:55.821 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.821 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.821 15:00:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:55.821 15:00:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9QP 00:23:55.821 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.821 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.821 15:00:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:55.821 15:00:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tvl 00:23:55.821 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.821 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.821 15:00:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:55.821 15:00:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.93z 00:23:55.821 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.821 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.821 15:00:55 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:55.821 15:00:55 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 
/tmp/spdk.key-sha512.IqH 00:23:55.821 15:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.821 15:00:55 -- common/autotest_common.sh@10 -- # set +x 00:23:55.821 15:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.821 15:00:55 -- host/auth.sh@92 -- # nvmet_auth_init 00:23:55.821 15:00:55 -- host/auth.sh@35 -- # get_main_ns_ip 00:23:55.821 15:00:55 -- nvmf/common.sh@717 -- # local ip 00:23:55.821 15:00:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:55.821 15:00:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:55.821 15:00:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.821 15:00:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.821 15:00:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:55.821 15:00:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:55.821 15:00:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:55.821 15:00:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:55.821 15:00:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:55.821 15:00:55 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:23:55.821 15:00:55 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:23:55.821 15:00:55 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:55.821 15:00:55 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.821 15:00:55 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:55.821 15:00:55 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:55.821 15:00:55 -- nvmf/common.sh@628 -- # local block nvme 00:23:55.821 15:00:55 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:55.821 15:00:55 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:55.821 15:00:55 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:55.821 15:00:55 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:56.753 Waiting for block devices as requested 00:23:57.012 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:23:57.012 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.012 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.012 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.271 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.271 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.271 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.271 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.530 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:57.530 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.530 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.788 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.788 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.788 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.788 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:58.046 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:58.047 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:58.306 15:00:58 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:58.306 15:00:58 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:58.306 15:00:58 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:58.306 15:00:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:58.306 15:00:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:58.306 15:00:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:58.306 15:00:58 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:58.306 15:00:58 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:58.306 15:00:58 
-- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:23:58.564 No valid GPT data, bailing
00:23:58.564 15:00:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:23:58.564 15:00:58 -- scripts/common.sh@391 -- # pt=
00:23:58.564 15:00:58 -- scripts/common.sh@392 -- # return 1
00:23:58.564 15:00:58 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
00:23:58.564 15:00:58 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]]
00:23:58.564 15:00:58 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:23:58.564 15:00:58 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:23:58.564 15:00:58 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:23:58.564 15:00:58 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:23:58.564 15:00:58 -- nvmf/common.sh@656 -- # echo 1
00:23:58.564 15:00:58 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
00:23:58.564 15:00:58 -- nvmf/common.sh@658 -- # echo 1
00:23:58.564 15:00:58 -- nvmf/common.sh@660 -- # echo 192.168.100.8
00:23:58.564 15:00:58 -- nvmf/common.sh@661 -- # echo rdma
00:23:58.564 15:00:58 -- nvmf/common.sh@662 -- # echo 4420
00:23:58.564 15:00:58 -- nvmf/common.sh@663 -- # echo ipv4
00:23:58.564 15:00:58 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:23:58.564 15:00:58 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd --hostid=8b464f06-2980-e311-ba20-001e67a94acd -a 192.168.100.8 -t rdma -s 4420
00:23:58.564
00:23:58.564 Discovery Log Number of Records 2, Generation counter 2
00:23:58.564 =====Discovery Log Entry 0======
00:23:58.564 trtype: rdma
00:23:58.564 adrfam: ipv4
00:23:58.564 subtype: current discovery subsystem
00:23:58.564 treq: not specified, sq flow control disable supported
00:23:58.564 portid: 1
00:23:58.564 trsvcid: 4420
00:23:58.564 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:23:58.564 traddr: 192.168.100.8
00:23:58.564 eflags: none
00:23:58.564 rdma_prtype: not specified
00:23:58.564 rdma_qptype: connected
00:23:58.564 rdma_cms: rdma-cm
00:23:58.564 rdma_pkey: 0x0000
00:23:58.564 =====Discovery Log Entry 1======
00:23:58.564 trtype: rdma
00:23:58.564 adrfam: ipv4
00:23:58.564 subtype: nvme subsystem
00:23:58.565 treq: not specified, sq flow control disable supported
00:23:58.565 portid: 1
00:23:58.565 trsvcid: 4420
00:23:58.565 subnqn: nqn.2024-02.io.spdk:cnode0
00:23:58.565 traddr: 192.168.100.8
00:23:58.565 eflags: none
00:23:58.565 rdma_prtype: not specified
00:23:58.565 rdma_qptype: connected
00:23:58.565 rdma_cms: rdma-cm
00:23:58.565 rdma_pkey: 0x0000
00:23:58.565 15:00:58 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:23:58.565 15:00:58 -- host/auth.sh@37 -- # echo 0
00:23:58.565 15:00:58 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:23:58.565 15:00:58 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:23:58.565 15:00:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:23:58.565 15:00:58 -- host/auth.sh@44 -- # digest=sha256
00:23:58.565 15:00:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:58.565 15:00:58 -- host/auth.sh@44 -- # keyid=1
00:23:58.565 15:00:58 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:23:58.565 15:00:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:23:58.565 15:00:58 -- host/auth.sh@48 -- # echo ffdhe2048
00:23:58.565 15:00:58 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:23:58.565 15:00:58 -- host/auth.sh@100 -- # IFS=,
00:23:58.565
15:00:58 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:23:58.565 15:00:58 -- host/auth.sh@100 -- # IFS=, 00:23:58.565 15:00:58 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.565 15:00:58 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:58.565 15:00:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.565 15:00:58 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:23:58.565 15:00:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.565 15:00:58 -- host/auth.sh@68 -- # keyid=1 00:23:58.565 15:00:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.565 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.565 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.565 15:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.565 15:00:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.565 15:00:58 -- nvmf/common.sh@717 -- # local ip 00:23:58.565 15:00:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.565 15:00:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.565 15:00:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.565 15:00:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.565 15:00:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:58.565 15:00:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:58.565 15:00:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:58.565 15:00:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:58.565 15:00:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:58.565 15:00:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:58.565 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.565 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.824 nvme0n1 00:23:58.824 15:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.824 15:00:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.824 15:00:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.824 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.824 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.824 15:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.824 15:00:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.824 15:00:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.824 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.824 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:58.824 15:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.824 15:00:58 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:58.824 15:00:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.824 15:00:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.824 15:00:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:58.824 15:00:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.824 15:00:58 -- host/auth.sh@44 -- # digest=sha256 00:23:58.824 15:00:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.824 15:00:58 -- host/auth.sh@44 -- # keyid=0 00:23:58.824 15:00:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:23:58.824 15:00:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:58.824 15:00:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.824 15:00:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:23:58.824 15:00:58 -- host/auth.sh@111 
-- # connect_authenticate sha256 ffdhe2048 0 00:23:58.824 15:00:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.824 15:00:58 -- host/auth.sh@68 -- # digest=sha256 00:23:58.824 15:00:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.824 15:00:58 -- host/auth.sh@68 -- # keyid=0 00:23:58.824 15:00:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:58.824 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.824 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:59.081 15:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.081 15:00:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.081 15:00:58 -- nvmf/common.sh@717 -- # local ip 00:23:59.081 15:00:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.081 15:00:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.081 15:00:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.081 15:00:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.081 15:00:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:59.081 15:00:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:59.081 15:00:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:59.081 15:00:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:59.081 15:00:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:59.081 15:00:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:59.081 15:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.081 15:00:58 -- common/autotest_common.sh@10 -- # set +x 00:23:59.081 nvme0n1 00:23:59.081 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.081 15:00:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.081 15:00:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.081 15:00:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.081 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:23:59.081 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.081 15:00:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.081 15:00:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.081 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.081 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:23:59.338 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.338 15:00:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.338 15:00:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.338 15:00:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.338 15:00:59 -- host/auth.sh@44 -- # digest=sha256 00:23:59.338 15:00:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.338 15:00:59 -- host/auth.sh@44 -- # keyid=1 00:23:59.338 15:00:59 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:23:59.338 15:00:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:59.338 15:00:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:59.338 15:00:59 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:23:59.339 15:00:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:23:59.339 15:00:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.339 15:00:59 -- host/auth.sh@68 -- # digest=sha256 00:23:59.339 15:00:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:59.339 15:00:59 -- host/auth.sh@68 -- # keyid=1 00:23:59.339 15:00:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.339 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.339 15:00:59 
-- common/autotest_common.sh@10 -- # set +x
00:23:59.339 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.339 15:00:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:23:59.339 15:00:59 -- nvmf/common.sh@717 -- # local ip
00:23:59.339 15:00:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:23:59.339 15:00:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:23:59.339 15:00:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:59.339 15:00:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:59.339 15:00:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:23:59.339 15:00:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:59.339 15:00:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:23:59.339 15:00:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:23:59.339 15:00:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:23:59.339 15:00:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:23:59.339 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.339 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.339 nvme0n1
00:23:59.339 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.339 15:00:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:23:59.339 15:00:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:23:59.339 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.339 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.339 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.339 15:00:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:59.339 15:00:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:59.339 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.339 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.601 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.601 15:00:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:23:59.601 15:00:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:23:59.601 15:00:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:23:59.601 15:00:59 -- host/auth.sh@44 -- # digest=sha256
00:23:59.601 15:00:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:59.601 15:00:59 -- host/auth.sh@44 -- # keyid=2
00:23:59.601 15:00:59 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:23:59.601 15:00:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:23:59.601 15:00:59 -- host/auth.sh@48 -- # echo ffdhe2048
00:23:59.601 15:00:59 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:23:59.601 15:00:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2
00:23:59.601 15:00:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:23:59.601 15:00:59 -- host/auth.sh@68 -- # digest=sha256
00:23:59.601 15:00:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:23:59.601 15:00:59 -- host/auth.sh@68 -- # keyid=2
00:23:59.601 15:00:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:59.601 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.601 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.601 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.601 15:00:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:23:59.601 15:00:59 -- nvmf/common.sh@717 -- # local ip
00:23:59.601 15:00:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:23:59.601 15:00:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:23:59.601 15:00:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:59.601 15:00:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:59.601 15:00:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:23:59.601 15:00:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:59.601 15:00:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:23:59.601 15:00:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:23:59.601 15:00:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:23:59.601 15:00:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:23:59.601 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.601 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.601 nvme0n1
00:23:59.601 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.601 15:00:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:23:59.601 15:00:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:23:59.601 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.601 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.601 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.602 15:00:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:59.602 15:00:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:59.602 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.602 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.861 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.861 15:00:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:23:59.861 15:00:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:23:59.861 15:00:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:23:59.861 15:00:59 -- host/auth.sh@44 -- # digest=sha256
00:23:59.861 15:00:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:59.861 15:00:59 -- host/auth.sh@44 -- # keyid=3
00:23:59.861 15:00:59 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:23:59.861 15:00:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:23:59.861 15:00:59 -- host/auth.sh@48 -- # echo ffdhe2048
00:23:59.861 15:00:59 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:23:59.861 15:00:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3
00:23:59.861 15:00:59 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:23:59.861 15:00:59 -- host/auth.sh@68 -- # digest=sha256
00:23:59.861 15:00:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:23:59.861 15:00:59 -- host/auth.sh@68 -- # keyid=3
00:23:59.861 15:00:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:59.861 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.861 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.861 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.861 15:00:59 -- host/auth.sh@70 -- # get_main_ns_ip
00:23:59.861 15:00:59 -- nvmf/common.sh@717 -- # local ip
00:23:59.861 15:00:59 -- nvmf/common.sh@718 -- # ip_candidates=()
00:23:59.861 15:00:59 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:23:59.861 15:00:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:59.861 15:00:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:59.861 15:00:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:23:59.861 15:00:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:23:59.861 15:00:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:23:59.861 15:00:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:23:59.861 15:00:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:23:59.861 15:00:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:23:59.861 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.861 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.861 nvme0n1
00:23:59.861 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:59.861 15:00:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:23:59.861 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:59.861 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.861 15:00:59 -- host/auth.sh@73 -- # jq -r '.[].name'
00:23:59.861 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.119 15:00:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.119 15:00:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.119 15:00:59 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.119 15:00:59 -- common/autotest_common.sh@10 -- # set +x
00:24:00.119 15:00:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.119 15:00:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:00.119 15:00:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:24:00.119 15:00:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:00.119 15:00:59 -- host/auth.sh@44 -- # digest=sha256
00:24:00.119 15:00:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:00.119 15:00:59 -- host/auth.sh@44 -- # keyid=4
00:24:00.119 15:00:59 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:00.119 15:00:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:00.119 15:00:59 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:00.119 15:00:59 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:00.119 15:01:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4
00:24:00.119 15:01:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:00.119 15:01:00 -- host/auth.sh@68 -- # digest=sha256
00:24:00.119 15:01:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:00.119 15:01:00 -- host/auth.sh@68 -- # keyid=4
00:24:00.119 15:01:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:00.119 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.119 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.119 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.119 15:01:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:00.119 15:01:00 -- nvmf/common.sh@717 -- # local ip
00:24:00.119 15:01:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:00.119 15:01:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:00.119 15:01:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.119 15:01:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.119 15:01:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:00.119 15:01:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:00.119 15:01:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:00.119 15:01:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:00.119 15:01:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:00.119 15:01:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:00.119 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.119 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.119 nvme0n1
00:24:00.119 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.119 15:01:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:00.119 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.119 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.119 15:01:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:00.377 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.377 15:01:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.377 15:01:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.377 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.377 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.377 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.377 15:01:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:00.377 15:01:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:00.377 15:01:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:00.377 15:01:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:00.377 15:01:00 -- host/auth.sh@44 -- # digest=sha256
00:24:00.377 15:01:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:00.377 15:01:00 -- host/auth.sh@44 -- # keyid=0
00:24:00.377 15:01:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:00.377 15:01:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:00.377 15:01:00 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:00.636 15:01:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:00.636 15:01:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0
00:24:00.636 15:01:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:00.636 15:01:00 -- host/auth.sh@68 -- # digest=sha256
00:24:00.636 15:01:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:00.636 15:01:00 -- host/auth.sh@68 -- # keyid=0
00:24:00.636 15:01:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:00.636 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.636 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.636 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.636 15:01:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:00.636 15:01:00 -- nvmf/common.sh@717 -- # local ip
00:24:00.636 15:01:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:00.636 15:01:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:00.636 15:01:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.636 15:01:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.636 15:01:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:00.636 15:01:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:00.636 15:01:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:00.636 15:01:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:00.636 15:01:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:00.636 15:01:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:00.636 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.636 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.896 nvme0n1
00:24:00.896 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.896 15:01:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:00.896 15:01:00 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:00.896 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.896 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.896 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.896 15:01:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:00.896 15:01:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:00.896 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.896 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.896 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.896 15:01:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:00.896 15:01:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:00.896 15:01:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:00.896 15:01:00 -- host/auth.sh@44 -- # digest=sha256
00:24:00.896 15:01:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:00.896 15:01:00 -- host/auth.sh@44 -- # keyid=1
00:24:00.896 15:01:00 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:00.896 15:01:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:00.896 15:01:00 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:00.896 15:01:00 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:00.896 15:01:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1
00:24:00.896 15:01:00 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:00.896 15:01:00 -- host/auth.sh@68 -- # digest=sha256
00:24:00.896 15:01:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:00.896 15:01:00 -- host/auth.sh@68 -- # keyid=1
00:24:00.896 15:01:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:00.896 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.896 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.896 15:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:00.896 15:01:00 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:00.896 15:01:00 -- nvmf/common.sh@717 -- # local ip
00:24:00.896 15:01:00 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:00.896 15:01:00 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:00.896 15:01:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:00.896 15:01:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:00.896 15:01:00 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:00.896 15:01:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:00.896 15:01:00 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:00.896 15:01:00 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:00.896 15:01:00 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:00.896 15:01:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:00.896 15:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:00.896 15:01:00 -- common/autotest_common.sh@10 -- # set +x
00:24:01.156 nvme0n1
00:24:01.156 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.156 15:01:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.156 15:01:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:01.156 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.156 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.156 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.156 15:01:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.156 15:01:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.156 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.156 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.156 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.156 15:01:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:01.156 15:01:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:01.156 15:01:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:01.156 15:01:01 -- host/auth.sh@44 -- # digest=sha256
00:24:01.156 15:01:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:01.156 15:01:01 -- host/auth.sh@44 -- # keyid=2
00:24:01.156 15:01:01 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:01.156 15:01:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:01.156 15:01:01 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:01.156 15:01:01 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:01.156 15:01:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:24:01.156 15:01:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:01.156 15:01:01 -- host/auth.sh@68 -- # digest=sha256
00:24:01.156 15:01:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:01.156 15:01:01 -- host/auth.sh@68 -- # keyid=2
00:24:01.156 15:01:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:01.156 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.156 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.156 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.156 15:01:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:01.156 15:01:01 -- nvmf/common.sh@717 -- # local ip
00:24:01.156 15:01:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:01.156 15:01:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:01.156 15:01:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.156 15:01:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.156 15:01:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:01.156 15:01:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:01.156 15:01:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:01.156 15:01:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:01.156 15:01:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:01.156 15:01:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:01.156 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.156 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.414 nvme0n1
00:24:01.414 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.414 15:01:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.414 15:01:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:01.414 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.414 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.414 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.414 15:01:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.414 15:01:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.414 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.414 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.414 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.414 15:01:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:01.414 15:01:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:01.414 15:01:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:01.414 15:01:01 -- host/auth.sh@44 -- # digest=sha256
00:24:01.414 15:01:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:01.414 15:01:01 -- host/auth.sh@44 -- # keyid=3
00:24:01.414 15:01:01 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:01.414 15:01:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:01.414 15:01:01 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:01.414 15:01:01 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:01.414 15:01:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:24:01.414 15:01:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:01.414 15:01:01 -- host/auth.sh@68 -- # digest=sha256
00:24:01.414 15:01:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:01.414 15:01:01 -- host/auth.sh@68 -- # keyid=3
00:24:01.414 15:01:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:01.414 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.414 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.672 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.672 15:01:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:01.672 15:01:01 -- nvmf/common.sh@717 -- # local ip
00:24:01.672 15:01:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:01.672 15:01:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:01.672 15:01:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.672 15:01:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.672 15:01:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:01.672 15:01:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:01.672 15:01:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:01.672 15:01:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:01.672 15:01:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:01.672 15:01:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:24:01.672 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.672 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.672 nvme0n1
00:24:01.672 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.672 15:01:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:01.672 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.672 15:01:01 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:01.672 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.672 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.932 15:01:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.932 15:01:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:01.932 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.932 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.932 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.932 15:01:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:01.932 15:01:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:01.932 15:01:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:01.932 15:01:01 -- host/auth.sh@44 -- # digest=sha256
00:24:01.932 15:01:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:01.932 15:01:01 -- host/auth.sh@44 -- # keyid=4
00:24:01.932 15:01:01 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:01.932 15:01:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:01.932 15:01:01 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:01.932 15:01:01 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:01.932 15:01:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:24:01.932 15:01:01 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:01.932 15:01:01 -- host/auth.sh@68 -- # digest=sha256
00:24:01.932 15:01:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:01.932 15:01:01 -- host/auth.sh@68 -- # keyid=4
00:24:01.932 15:01:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:01.932 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.932 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:01.932 15:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:01.932 15:01:01 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:01.932 15:01:01 -- nvmf/common.sh@717 -- # local ip
00:24:01.932 15:01:01 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:01.932 15:01:01 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:01.932 15:01:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:01.932 15:01:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:01.932 15:01:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:01.932 15:01:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:01.932 15:01:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:01.932 15:01:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:01.932 15:01:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:01.932 15:01:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:01.932 15:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:01.932 15:01:01 -- common/autotest_common.sh@10 -- # set +x
00:24:02.189 nvme0n1
00:24:02.189 15:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:02.189 15:01:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:02.189 15:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:02.189 15:01:02 -- common/autotest_common.sh@10 -- # set +x
00:24:02.189 15:01:02 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:02.189 15:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:02.189 15:01:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:02.189 15:01:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:02.189 15:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:02.189 15:01:02 -- common/autotest_common.sh@10 -- # set +x
00:24:02.189 15:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:02.189 15:01:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:02.189 15:01:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:02.189 15:01:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:02.189 15:01:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:02.189 15:01:02 -- host/auth.sh@44 -- # digest=sha256
00:24:02.189 15:01:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:02.189 15:01:02 -- host/auth.sh@44 -- # keyid=0
00:24:02.189 15:01:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:02.189 15:01:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:02.189 15:01:02 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:02.755 15:01:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:02.755 15:01:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:24:02.755 15:01:02 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:02.755 15:01:02 -- host/auth.sh@68 -- # digest=sha256
00:24:02.755 15:01:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:02.755 15:01:02 -- host/auth.sh@68 -- # keyid=0
00:24:02.755 15:01:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:02.755 15:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:02.755 15:01:02 -- common/autotest_common.sh@10 -- # set +x
00:24:02.755 15:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:02.755 15:01:02 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:02.755 15:01:02 -- nvmf/common.sh@717 -- # local ip
00:24:02.755 15:01:02 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:02.755 15:01:02 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:02.755 15:01:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:02.755 15:01:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:02.755 15:01:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:02.755 15:01:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:02.755 15:01:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:02.755 15:01:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:02.755 15:01:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:02.755 15:01:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:02.755 15:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:02.755 15:01:02 -- common/autotest_common.sh@10 -- # set +x
00:24:03.011 nvme0n1
00:24:03.011 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.011 15:01:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:03.011 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.011 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.011 15:01:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:03.011 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.011 15:01:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.011 15:01:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:03.011 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.011 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.269 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.269 15:01:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:03.269 15:01:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:03.269 15:01:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:03.269 15:01:03 -- host/auth.sh@44 -- # digest=sha256
00:24:03.269 15:01:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:03.269 15:01:03 -- host/auth.sh@44 -- # keyid=1
00:24:03.269 15:01:03 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:03.269 15:01:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:03.269 15:01:03 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:03.269 15:01:03 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:03.269 15:01:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:24:03.269 15:01:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:03.269 15:01:03 -- host/auth.sh@68 -- # digest=sha256
00:24:03.269 15:01:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:03.269 15:01:03 -- host/auth.sh@68 -- # keyid=1
00:24:03.269 15:01:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:03.269 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.269 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.269 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.269 15:01:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:03.269 15:01:03 -- nvmf/common.sh@717 -- # local ip
00:24:03.269 15:01:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:03.269 15:01:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:03.269 15:01:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:03.269 15:01:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:03.269 15:01:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:03.269 15:01:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:03.269 15:01:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:03.269 15:01:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:03.269 15:01:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:03.269 15:01:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:03.269 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.269 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.529 nvme0n1
00:24:03.529 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.529 15:01:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:03.529 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.529 15:01:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:03.529 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.529 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.529 15:01:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.529 15:01:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:03.529 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.529 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.529 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.529 15:01:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:03.529 15:01:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:24:03.529 15:01:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:03.529 15:01:03 -- host/auth.sh@44 -- # digest=sha256
00:24:03.529 15:01:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:03.529 15:01:03 -- host/auth.sh@44 -- # keyid=2
00:24:03.529 15:01:03 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:03.529 15:01:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:03.529 15:01:03 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:03.529 15:01:03 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:03.529 15:01:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:24:03.529 15:01:03 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:03.529 15:01:03 -- host/auth.sh@68 -- # digest=sha256
00:24:03.529 15:01:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:03.529 15:01:03 -- host/auth.sh@68 -- # keyid=2
00:24:03.529 15:01:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:03.529 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.529 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.529 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.529 15:01:03 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:03.529 15:01:03 -- nvmf/common.sh@717 -- # local ip
00:24:03.529 15:01:03 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:03.529 15:01:03 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:03.529 15:01:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:03.529 15:01:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:03.529 15:01:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:03.529 15:01:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:03.529 15:01:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:03.529 15:01:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:03.529 15:01:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:03.529 15:01:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:03.529 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.529 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.788 nvme0n1
00:24:03.788 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.788 15:01:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:03.788 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:03.788 15:01:03 -- common/autotest_common.sh@10 -- # set +x
00:24:03.788 15:01:03 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:03.788 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:03.788 15:01:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:03.788 15:01:03 -- host/auth.sh@74 -- #
rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.788 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.788 15:01:03 -- common/autotest_common.sh@10 -- # set +x 00:24:04.046 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.046 15:01:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.046 15:01:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:04.046 15:01:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.046 15:01:03 -- host/auth.sh@44 -- # digest=sha256 00:24:04.046 15:01:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.046 15:01:03 -- host/auth.sh@44 -- # keyid=3 00:24:04.046 15:01:03 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:04.046 15:01:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:04.046 15:01:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:04.046 15:01:03 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:04.046 15:01:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:04.046 15:01:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:04.046 15:01:03 -- host/auth.sh@68 -- # digest=sha256 00:24:04.046 15:01:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:04.046 15:01:03 -- host/auth.sh@68 -- # keyid=3 00:24:04.046 15:01:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:04.046 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.046 15:01:03 -- common/autotest_common.sh@10 -- # set +x 00:24:04.046 15:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.046 15:01:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:04.046 15:01:03 -- nvmf/common.sh@717 -- # local ip 00:24:04.046 15:01:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:04.046 15:01:03 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:04.046 15:01:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.046 15:01:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.046 15:01:03 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:04.046 15:01:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:04.046 15:01:03 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:04.046 15:01:03 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:04.046 15:01:03 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:04.046 15:01:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:04.046 15:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.046 15:01:03 -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 nvme0n1 00:24:04.305 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.305 15:01:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.305 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.305 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 15:01:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:04.305 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.305 15:01:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.305 15:01:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.305 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.305 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.305 15:01:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.305 15:01:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:04.305 15:01:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.305 15:01:04 -- 
host/auth.sh@44 -- # digest=sha256 00:24:04.305 15:01:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.305 15:01:04 -- host/auth.sh@44 -- # keyid=4 00:24:04.305 15:01:04 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:04.305 15:01:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:04.305 15:01:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:04.305 15:01:04 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:04.305 15:01:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:04.305 15:01:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:04.305 15:01:04 -- host/auth.sh@68 -- # digest=sha256 00:24:04.305 15:01:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:04.305 15:01:04 -- host/auth.sh@68 -- # keyid=4 00:24:04.305 15:01:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:04.305 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.305 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.305 15:01:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:04.305 15:01:04 -- nvmf/common.sh@717 -- # local ip 00:24:04.305 15:01:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:04.305 15:01:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:04.305 15:01:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.305 15:01:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.305 15:01:04 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:04.305 15:01:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:04.305 15:01:04 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:04.305 15:01:04 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 
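The `get_main_ns_ip` trace from `nvmf/common.sh`, repeated before every attach above, resolves the target IP in two steps: an associative array maps the transport to the *name* of the environment variable holding the IP, and that name is then dereferenced. The sketch below reconstructs this from the trace alone; the variable name `TEST_TRANSPORT` is an assumption (the trace only shows the literal `rdma` being tested), and the function body is a hedged reconstruction, not the file's exact code.

```shell
#!/usr/bin/env bash
get_main_ns_ip() {
    local ip
    # Transport -> name of the env var that holds the main namespace IP
    # (variable names taken verbatim from the trace).
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Bail out if the transport is unset or unknown (assumed variable name).
    [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: resolve e.g. NVMF_FIRST_TARGET_IP to its value.
    [[ -z ${!ip:-} ]] && return 1
    echo "${!ip}"
}

TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 get_main_ns_ip
```

With an RDMA transport this prints the value of `NVMF_FIRST_TARGET_IP`, which is why `192.168.100.8` is echoed before each `bdev_nvme_attach_controller` call in the log.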
00:24:04.305 15:01:04 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:04.305 15:01:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.305 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.305 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.869 nvme0n1 00:24:04.869 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.869 15:01:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.869 15:01:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:04.869 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.869 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.869 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.869 15:01:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.869 15:01:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.869 15:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.869 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:24:04.869 15:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.869 15:01:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.869 15:01:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.869 15:01:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:04.869 15:01:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.869 15:01:04 -- host/auth.sh@44 -- # digest=sha256 00:24:04.869 15:01:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.869 15:01:04 -- host/auth.sh@44 -- # keyid=0 00:24:04.869 15:01:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:04.869 15:01:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:04.869 15:01:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:06.788 15:01:06 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:06.788 15:01:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:06.788 15:01:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.788 15:01:06 -- host/auth.sh@68 -- # digest=sha256 00:24:06.788 15:01:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:06.788 15:01:06 -- host/auth.sh@68 -- # keyid=0 00:24:06.788 15:01:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.788 15:01:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.788 15:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:06.788 15:01:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.788 15:01:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.788 15:01:06 -- nvmf/common.sh@717 -- # local ip 00:24:06.788 15:01:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.788 15:01:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.788 15:01:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.788 15:01:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.788 15:01:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:06.788 15:01:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:06.788 15:01:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:06.788 15:01:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:06.788 15:01:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:06.788 15:01:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:06.788 15:01:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.788 15:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:07.047 nvme0n1 00:24:07.047 15:01:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:24:07.047 15:01:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.047 15:01:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.047 15:01:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.047 15:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:07.047 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.047 15:01:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.047 15:01:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.047 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.047 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.047 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.047 15:01:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.047 15:01:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:07.047 15:01:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.047 15:01:07 -- host/auth.sh@44 -- # digest=sha256 00:24:07.047 15:01:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.047 15:01:07 -- host/auth.sh@44 -- # keyid=1 00:24:07.047 15:01:07 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:07.047 15:01:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.047 15:01:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:07.047 15:01:07 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:07.047 15:01:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:07.047 15:01:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.047 15:01:07 -- host/auth.sh@68 -- # digest=sha256 00:24:07.047 15:01:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:07.047 15:01:07 -- host/auth.sh@68 -- # keyid=1 00:24:07.047 15:01:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:24:07.047 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.047 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.047 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.047 15:01:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.047 15:01:07 -- nvmf/common.sh@717 -- # local ip 00:24:07.047 15:01:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.047 15:01:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.047 15:01:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.047 15:01:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.047 15:01:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:07.047 15:01:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.047 15:01:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.047 15:01:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:07.047 15:01:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:07.047 15:01:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:07.047 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.047 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.615 nvme0n1 00:24:07.615 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.615 15:01:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.615 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.615 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.615 15:01:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.615 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.615 15:01:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.615 15:01:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.615 15:01:07 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.615 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.879 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.880 15:01:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.880 15:01:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:07.880 15:01:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.880 15:01:07 -- host/auth.sh@44 -- # digest=sha256 00:24:07.880 15:01:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.880 15:01:07 -- host/auth.sh@44 -- # keyid=2 00:24:07.880 15:01:07 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:07.880 15:01:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.880 15:01:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:07.880 15:01:07 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:07.880 15:01:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:07.880 15:01:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.880 15:01:07 -- host/auth.sh@68 -- # digest=sha256 00:24:07.880 15:01:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:07.880 15:01:07 -- host/auth.sh@68 -- # keyid=2 00:24:07.880 15:01:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:07.880 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.880 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:07.880 15:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.880 15:01:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.880 15:01:07 -- nvmf/common.sh@717 -- # local ip 00:24:07.880 15:01:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.880 15:01:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.880 15:01:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.880 
15:01:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.880 15:01:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:07.880 15:01:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:07.880 15:01:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:07.880 15:01:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:07.880 15:01:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:07.880 15:01:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.880 15:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.880 15:01:07 -- common/autotest_common.sh@10 -- # set +x 00:24:08.448 nvme0n1 00:24:08.448 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.448 15:01:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.448 15:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.448 15:01:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.448 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:24:08.448 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.448 15:01:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.448 15:01:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.448 15:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.448 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:24:08.448 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.448 15:01:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.448 15:01:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:08.448 15:01:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.448 15:01:08 -- host/auth.sh@44 -- # digest=sha256 00:24:08.448 15:01:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.448 15:01:08 -- 
host/auth.sh@44 -- # keyid=3 00:24:08.448 15:01:08 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:08.448 15:01:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.448 15:01:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:08.448 15:01:08 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:08.448 15:01:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:08.448 15:01:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.448 15:01:08 -- host/auth.sh@68 -- # digest=sha256 00:24:08.448 15:01:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:08.448 15:01:08 -- host/auth.sh@68 -- # keyid=3 00:24:08.448 15:01:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:08.448 15:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.448 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:24:08.448 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.448 15:01:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.448 15:01:08 -- nvmf/common.sh@717 -- # local ip 00:24:08.448 15:01:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.448 15:01:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.448 15:01:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.448 15:01:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.448 15:01:08 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:08.448 15:01:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.448 15:01:08 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.448 15:01:08 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:08.448 15:01:08 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:08.448 15:01:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f 
ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:08.448 15:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.448 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:24:09.014 nvme0n1 00:24:09.014 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.014 15:01:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.014 15:01:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.014 15:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.014 15:01:08 -- common/autotest_common.sh@10 -- # set +x 00:24:09.014 15:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.014 15:01:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.014 15:01:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.014 15:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.014 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:09.014 15:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.014 15:01:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.014 15:01:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:09.014 15:01:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.014 15:01:09 -- host/auth.sh@44 -- # digest=sha256 00:24:09.014 15:01:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.014 15:01:09 -- host/auth.sh@44 -- # keyid=4 00:24:09.014 15:01:09 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:09.014 15:01:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.014 15:01:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:09.014 15:01:09 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:09.014 15:01:09 -- host/auth.sh@111 -- # connect_authenticate sha256 
ffdhe6144 4 00:24:09.014 15:01:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.014 15:01:09 -- host/auth.sh@68 -- # digest=sha256 00:24:09.014 15:01:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:09.014 15:01:09 -- host/auth.sh@68 -- # keyid=4 00:24:09.014 15:01:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.014 15:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.014 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:09.014 15:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.014 15:01:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.014 15:01:09 -- nvmf/common.sh@717 -- # local ip 00:24:09.014 15:01:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.014 15:01:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.014 15:01:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.014 15:01:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.014 15:01:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:09.014 15:01:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:09.014 15:01:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:09.014 15:01:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:09.014 15:01:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:09.014 15:01:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.014 15:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.014 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:09.581 nvme0n1 00:24:09.581 15:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.581 15:01:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.581 15:01:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.581 
15:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.581 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:09.581 15:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.840 15:01:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.840 15:01:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.840 15:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.840 15:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:09.840 15:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.840 15:01:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.840 15:01:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.840 15:01:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:09.840 15:01:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.840 15:01:09 -- host/auth.sh@44 -- # digest=sha256 00:24:09.840 15:01:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.840 15:01:09 -- host/auth.sh@44 -- # keyid=0 00:24:09.840 15:01:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:09.840 15:01:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.840 15:01:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.028 15:01:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:14.028 15:01:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:14.028 15:01:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.028 15:01:13 -- host/auth.sh@68 -- # digest=sha256 00:24:14.028 15:01:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.028 15:01:13 -- host/auth.sh@68 -- # keyid=0 00:24:14.028 15:01:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.028 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.028 15:01:13 -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.028 15:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.028 15:01:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.028 15:01:13 -- nvmf/common.sh@717 -- # local ip 00:24:14.028 15:01:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.028 15:01:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.029 15:01:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.029 15:01:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.029 15:01:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:14.029 15:01:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:14.029 15:01:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:14.029 15:01:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:14.029 15:01:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:14.029 15:01:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:14.029 15:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.029 15:01:13 -- common/autotest_common.sh@10 -- # set +x 00:24:14.594 nvme0n1 00:24:14.594 15:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.594 15:01:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.594 15:01:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.594 15:01:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.594 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:24:14.594 15:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.594 15:01:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.594 15:01:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.594 15:01:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.594 15:01:14 -- common/autotest_common.sh@10 -- 
# set +x 00:24:14.594 15:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.594 15:01:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.594 15:01:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:14.594 15:01:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.594 15:01:14 -- host/auth.sh@44 -- # digest=sha256 00:24:14.594 15:01:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.594 15:01:14 -- host/auth.sh@44 -- # keyid=1 00:24:14.594 15:01:14 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:14.594 15:01:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.594 15:01:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.594 15:01:14 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:14.594 15:01:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:14.594 15:01:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.594 15:01:14 -- host/auth.sh@68 -- # digest=sha256 00:24:14.594 15:01:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.594 15:01:14 -- host/auth.sh@68 -- # keyid=1 00:24:14.594 15:01:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.594 15:01:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.594 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:24:14.594 15:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.594 15:01:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.594 15:01:14 -- nvmf/common.sh@717 -- # local ip 00:24:14.594 15:01:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.594 15:01:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.594 15:01:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.594 15:01:14 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.594 15:01:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:14.594 15:01:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:14.594 15:01:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:14.594 15:01:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:14.594 15:01:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:14.594 15:01:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:14.594 15:01:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.594 15:01:14 -- common/autotest_common.sh@10 -- # set +x 00:24:15.995 nvme0n1 00:24:15.995 15:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.995 15:01:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.995 15:01:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.995 15:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.995 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.995 15:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.995 15:01:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.995 15:01:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.995 15:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.995 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.995 15:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.995 15:01:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.995 15:01:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:15.995 15:01:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.995 15:01:15 -- host/auth.sh@44 -- # digest=sha256 00:24:15.995 15:01:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.995 15:01:15 -- host/auth.sh@44 -- # keyid=2 00:24:15.995 
15:01:15 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:15.995 15:01:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.995 15:01:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:15.995 15:01:15 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:15.996 15:01:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:15.996 15:01:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.996 15:01:15 -- host/auth.sh@68 -- # digest=sha256 00:24:15.996 15:01:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:15.996 15:01:15 -- host/auth.sh@68 -- # keyid=2 00:24:15.996 15:01:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.996 15:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.996 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:15.996 15:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.996 15:01:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.996 15:01:15 -- nvmf/common.sh@717 -- # local ip 00:24:15.996 15:01:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.996 15:01:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.996 15:01:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.996 15:01:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.996 15:01:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:15.996 15:01:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:15.996 15:01:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:15.996 15:01:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:15.996 15:01:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:15.996 15:01:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key2 00:24:15.996 15:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.996 15:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 nvme0n1 00:24:16.977 15:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.977 15:01:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.977 15:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.977 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 15:01:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.977 15:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.977 15:01:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.977 15:01:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.977 15:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.977 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 15:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.977 15:01:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.977 15:01:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:16.977 15:01:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.977 15:01:16 -- host/auth.sh@44 -- # digest=sha256 00:24:16.977 15:01:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.977 15:01:16 -- host/auth.sh@44 -- # keyid=3 00:24:16.977 15:01:16 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:16.977 15:01:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.977 15:01:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:16.977 15:01:16 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:16.977 15:01:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:16.977 15:01:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.977 15:01:16 -- host/auth.sh@68 -- # 
digest=sha256 00:24:16.977 15:01:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:16.977 15:01:16 -- host/auth.sh@68 -- # keyid=3 00:24:16.977 15:01:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.977 15:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.977 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 15:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.977 15:01:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.977 15:01:16 -- nvmf/common.sh@717 -- # local ip 00:24:16.977 15:01:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.977 15:01:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.977 15:01:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.977 15:01:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.977 15:01:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:16.977 15:01:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:16.977 15:01:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:16.977 15:01:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:16.977 15:01:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:16.977 15:01:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:16.977 15:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.977 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:24:17.948 nvme0n1 00:24:17.948 15:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.948 15:01:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.948 15:01:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.948 15:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.948 15:01:17 -- common/autotest_common.sh@10 -- # set +x 
00:24:17.948 15:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.948 15:01:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.948 15:01:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.948 15:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.948 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:24:17.948 15:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.948 15:01:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.948 15:01:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:17.948 15:01:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.948 15:01:17 -- host/auth.sh@44 -- # digest=sha256 00:24:17.948 15:01:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.948 15:01:17 -- host/auth.sh@44 -- # keyid=4 00:24:17.948 15:01:17 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:17.948 15:01:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.948 15:01:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:17.948 15:01:17 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:17.948 15:01:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:17.948 15:01:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.949 15:01:17 -- host/auth.sh@68 -- # digest=sha256 00:24:17.949 15:01:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:17.949 15:01:17 -- host/auth.sh@68 -- # keyid=4 00:24:17.949 15:01:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.949 15:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.949 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:24:17.949 15:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.949 15:01:17 
-- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.949 15:01:17 -- nvmf/common.sh@717 -- # local ip 00:24:17.949 15:01:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.949 15:01:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.949 15:01:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.949 15:01:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.949 15:01:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:17.949 15:01:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:17.949 15:01:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:17.949 15:01:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:17.949 15:01:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:17.949 15:01:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.949 15:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.949 15:01:17 -- common/autotest_common.sh@10 -- # set +x 00:24:18.893 nvme0n1 00:24:18.893 15:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.893 15:01:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.893 15:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.893 15:01:18 -- common/autotest_common.sh@10 -- # set +x 00:24:18.893 15:01:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.893 15:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.893 15:01:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.893 15:01:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.893 15:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.893 15:01:18 -- common/autotest_common.sh@10 -- # set +x 00:24:18.893 15:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.893 15:01:18 -- host/auth.sh@107 -- # for 
digest in "${digests[@]}" 00:24:18.893 15:01:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.893 15:01:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.893 15:01:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:18.893 15:01:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.893 15:01:18 -- host/auth.sh@44 -- # digest=sha384 00:24:18.893 15:01:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.893 15:01:18 -- host/auth.sh@44 -- # keyid=0 00:24:18.893 15:01:18 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:18.893 15:01:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.893 15:01:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:18.893 15:01:18 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:18.893 15:01:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:18.893 15:01:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.893 15:01:18 -- host/auth.sh@68 -- # digest=sha384 00:24:18.893 15:01:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:18.893 15:01:18 -- host/auth.sh@68 -- # keyid=0 00:24:18.893 15:01:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:18.893 15:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.893 15:01:18 -- common/autotest_common.sh@10 -- # set +x 00:24:18.893 15:01:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.893 15:01:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.893 15:01:18 -- nvmf/common.sh@717 -- # local ip 00:24:18.893 15:01:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.893 15:01:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.893 15:01:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.893 15:01:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.893 
15:01:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:18.893 15:01:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:18.893 15:01:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:18.893 15:01:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:18.893 15:01:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:18.893 15:01:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:18.893 15:01:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.893 15:01:18 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 nvme0n1 00:24:19.153 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.153 15:01:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.153 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 15:01:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.153 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.153 15:01:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.153 15:01:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.153 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.153 15:01:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.153 15:01:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:19.153 15:01:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.153 15:01:19 -- host/auth.sh@44 -- # digest=sha384 00:24:19.153 15:01:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.153 15:01:19 -- host/auth.sh@44 -- # keyid=1 00:24:19.153 15:01:19 -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:19.153 15:01:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.153 15:01:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:19.153 15:01:19 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:19.153 15:01:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:19.153 15:01:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.153 15:01:19 -- host/auth.sh@68 -- # digest=sha384 00:24:19.153 15:01:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:19.153 15:01:19 -- host/auth.sh@68 -- # keyid=1 00:24:19.153 15:01:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.153 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.153 15:01:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.153 15:01:19 -- nvmf/common.sh@717 -- # local ip 00:24:19.153 15:01:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.153 15:01:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.153 15:01:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.153 15:01:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.153 15:01:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:19.153 15:01:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.153 15:01:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.153 15:01:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:19.153 15:01:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:19.153 15:01:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:19.153 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.153 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.410 nvme0n1 00:24:19.410 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.410 15:01:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.410 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.410 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.410 15:01:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.410 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.410 15:01:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.410 15:01:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.410 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.410 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.410 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.410 15:01:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.410 15:01:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:19.410 15:01:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.410 15:01:19 -- host/auth.sh@44 -- # digest=sha384 00:24:19.410 15:01:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.410 15:01:19 -- host/auth.sh@44 -- # keyid=2 00:24:19.410 15:01:19 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:19.410 15:01:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.410 15:01:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:19.410 15:01:19 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:19.410 15:01:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:19.410 15:01:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.410 15:01:19 -- host/auth.sh@68 -- # digest=sha384 
00:24:19.410 15:01:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:19.410 15:01:19 -- host/auth.sh@68 -- # keyid=2 00:24:19.410 15:01:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.410 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.410 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.410 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.410 15:01:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.410 15:01:19 -- nvmf/common.sh@717 -- # local ip 00:24:19.410 15:01:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.410 15:01:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.410 15:01:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.410 15:01:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.410 15:01:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:19.410 15:01:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.410 15:01:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.410 15:01:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:19.410 15:01:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:19.410 15:01:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:19.411 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.411 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.667 nvme0n1 00:24:19.667 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.667 15:01:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.667 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.667 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.667 15:01:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.667 
15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.667 15:01:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.667 15:01:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.667 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.667 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.925 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.925 15:01:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.925 15:01:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:19.925 15:01:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.925 15:01:19 -- host/auth.sh@44 -- # digest=sha384 00:24:19.925 15:01:19 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.925 15:01:19 -- host/auth.sh@44 -- # keyid=3 00:24:19.925 15:01:19 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:19.925 15:01:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.925 15:01:19 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:19.925 15:01:19 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:19.925 15:01:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:24:19.925 15:01:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.925 15:01:19 -- host/auth.sh@68 -- # digest=sha384 00:24:19.925 15:01:19 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:19.925 15:01:19 -- host/auth.sh@68 -- # keyid=3 00:24:19.925 15:01:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:19.925 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.925 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.925 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.925 15:01:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.925 
15:01:19 -- nvmf/common.sh@717 -- # local ip 00:24:19.925 15:01:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.925 15:01:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.925 15:01:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.925 15:01:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.925 15:01:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:19.925 15:01:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:19.925 15:01:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:19.925 15:01:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:19.925 15:01:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:19.925 15:01:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:19.925 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.925 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.925 nvme0n1 00:24:19.925 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.925 15:01:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.925 15:01:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.925 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.925 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:19.925 15:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.925 15:01:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.925 15:01:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.925 15:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.925 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:20.183 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.183 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.183 15:01:20 -- 
host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:20.183 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.183 15:01:20 -- host/auth.sh@44 -- # digest=sha384 00:24:20.183 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.183 15:01:20 -- host/auth.sh@44 -- # keyid=4 00:24:20.183 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:20.183 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.183 15:01:20 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:20.183 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:20.183 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:20.183 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.183 15:01:20 -- host/auth.sh@68 -- # digest=sha384 00:24:20.183 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:20.183 15:01:20 -- host/auth.sh@68 -- # keyid=4 00:24:20.183 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:20.183 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.183 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.183 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.183 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.183 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:20.183 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.183 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.183 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.183 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.183 15:01:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:20.183 15:01:20 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:24:20.183 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.183 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:20.183 15:01:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:20.183 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.183 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.183 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.183 nvme0n1 00:24:20.183 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.183 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.183 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.183 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.183 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.183 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.183 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.183 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.183 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.183 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.442 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.442 15:01:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.442 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.442 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:20.442 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.442 15:01:20 -- host/auth.sh@44 -- # digest=sha384 00:24:20.442 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.442 15:01:20 -- host/auth.sh@44 -- # keyid=0 00:24:20.442 15:01:20 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:20.442 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.442 15:01:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:20.442 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:20.442 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:20.442 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.442 15:01:20 -- host/auth.sh@68 -- # digest=sha384 00:24:20.442 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:20.442 15:01:20 -- host/auth.sh@68 -- # keyid=0 00:24:20.442 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.442 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.442 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.442 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.442 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.442 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:20.442 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.442 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.442 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.442 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.442 15:01:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:20.442 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.442 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.442 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:20.442 15:01:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:20.442 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:20.442 
15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.442 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.442 nvme0n1 00:24:20.442 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.442 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.442 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.442 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.442 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.701 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.701 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.701 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.701 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.701 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.701 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.701 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.701 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:20.701 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.701 15:01:20 -- host/auth.sh@44 -- # digest=sha384 00:24:20.701 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.701 15:01:20 -- host/auth.sh@44 -- # keyid=1 00:24:20.701 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:20.701 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.701 15:01:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:20.701 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:20.701 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:20.702 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.702 15:01:20 -- host/auth.sh@68 -- # digest=sha384 00:24:20.702 
15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:20.702 15:01:20 -- host/auth.sh@68 -- # keyid=1 00:24:20.702 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.702 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.702 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.702 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.702 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.702 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:20.702 15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.702 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.702 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.702 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.702 15:01:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:20.702 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.702 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.702 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:20.702 15:01:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:20.702 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:20.702 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.702 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.960 nvme0n1 00:24:20.960 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.960 15:01:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.960 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.960 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.960 15:01:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.960 15:01:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.960 15:01:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.960 15:01:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.960 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.960 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.960 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.960 15:01:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.960 15:01:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:20.960 15:01:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.960 15:01:20 -- host/auth.sh@44 -- # digest=sha384 00:24:20.960 15:01:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.960 15:01:20 -- host/auth.sh@44 -- # keyid=2 00:24:20.960 15:01:20 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:20.960 15:01:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.960 15:01:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:20.960 15:01:20 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:20.960 15:01:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:20.960 15:01:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.960 15:01:20 -- host/auth.sh@68 -- # digest=sha384 00:24:20.960 15:01:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:20.960 15:01:20 -- host/auth.sh@68 -- # keyid=2 00:24:20.960 15:01:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:20.960 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.961 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:20.961 15:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.961 15:01:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.961 15:01:20 -- nvmf/common.sh@717 -- # local ip 00:24:20.961 
15:01:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.961 15:01:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.961 15:01:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.961 15:01:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.961 15:01:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:20.961 15:01:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:20.961 15:01:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:20.961 15:01:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:20.961 15:01:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:20.961 15:01:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:20.961 15:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.961 15:01:20 -- common/autotest_common.sh@10 -- # set +x 00:24:21.218 nvme0n1 00:24:21.218 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.218 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.218 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.218 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.218 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.218 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.218 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.218 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.218 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.218 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.218 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.218 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.218 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 
00:24:21.218 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.218 15:01:21 -- host/auth.sh@44 -- # digest=sha384 00:24:21.218 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.218 15:01:21 -- host/auth.sh@44 -- # keyid=3 00:24:21.218 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:21.218 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.218 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:21.218 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:21.218 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:21.218 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.218 15:01:21 -- host/auth.sh@68 -- # digest=sha384 00:24:21.219 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:21.219 15:01:21 -- host/auth.sh@68 -- # keyid=3 00:24:21.219 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.219 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.219 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.219 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.219 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.219 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:21.219 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.219 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.219 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.219 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.219 15:01:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:21.219 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.219 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.219 
15:01:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:21.219 15:01:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:21.219 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:21.219 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.219 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.475 nvme0n1 00:24:21.475 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.475 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.475 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.475 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.475 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.476 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.476 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.476 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.476 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.476 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.476 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.476 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.476 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:21.476 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.476 15:01:21 -- host/auth.sh@44 -- # digest=sha384 00:24:21.476 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:21.476 15:01:21 -- host/auth.sh@44 -- # keyid=4 00:24:21.476 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:21.476 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.476 15:01:21 -- host/auth.sh@48 -- # echo ffdhe3072 
00:24:21.476 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:21.476 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:21.476 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.476 15:01:21 -- host/auth.sh@68 -- # digest=sha384 00:24:21.476 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:21.476 15:01:21 -- host/auth.sh@68 -- # keyid=4 00:24:21.476 15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:21.476 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.476 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.476 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.476 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.476 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:21.733 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.733 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.733 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.733 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.733 15:01:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:21.733 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.733 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.733 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:21.733 15:01:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:21.733 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.733 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.733 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.733 nvme0n1 
00:24:21.733 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.733 15:01:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.733 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.733 15:01:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.733 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.733 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.733 15:01:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.733 15:01:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.733 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.733 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.996 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.996 15:01:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.996 15:01:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.996 15:01:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:21.996 15:01:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.996 15:01:21 -- host/auth.sh@44 -- # digest=sha384 00:24:21.996 15:01:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.996 15:01:21 -- host/auth.sh@44 -- # keyid=0 00:24:21.996 15:01:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:21.996 15:01:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.996 15:01:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:21.996 15:01:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:21.996 15:01:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:21.996 15:01:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.996 15:01:21 -- host/auth.sh@68 -- # digest=sha384 00:24:21.996 15:01:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:21.996 15:01:21 -- host/auth.sh@68 -- # keyid=0 00:24:21.996 
15:01:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:21.996 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.996 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:21.996 15:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.996 15:01:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.996 15:01:21 -- nvmf/common.sh@717 -- # local ip 00:24:21.996 15:01:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.996 15:01:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.996 15:01:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.996 15:01:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.996 15:01:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:21.996 15:01:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:21.996 15:01:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:21.996 15:01:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:21.996 15:01:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:21.996 15:01:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:21.996 15:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.996 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:24:22.256 nvme0n1 00:24:22.256 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.256 15:01:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.256 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.256 15:01:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.256 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.256 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.256 15:01:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:24:22.256 15:01:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.256 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.256 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.256 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.256 15:01:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.256 15:01:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:22.256 15:01:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.256 15:01:22 -- host/auth.sh@44 -- # digest=sha384 00:24:22.256 15:01:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.256 15:01:22 -- host/auth.sh@44 -- # keyid=1 00:24:22.256 15:01:22 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:22.256 15:01:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.256 15:01:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:22.256 15:01:22 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:22.256 15:01:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:22.256 15:01:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.256 15:01:22 -- host/auth.sh@68 -- # digest=sha384 00:24:22.256 15:01:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:22.256 15:01:22 -- host/auth.sh@68 -- # keyid=1 00:24:22.256 15:01:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.256 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.256 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.256 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.256 15:01:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.256 15:01:22 -- nvmf/common.sh@717 -- # local ip 00:24:22.256 15:01:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.256 15:01:22 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.256 15:01:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.256 15:01:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.256 15:01:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:22.256 15:01:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.256 15:01:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.256 15:01:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:22.256 15:01:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:22.256 15:01:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:22.256 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.256 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.824 nvme0n1 00:24:22.824 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.824 15:01:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.824 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.824 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.824 15:01:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.824 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.824 15:01:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.824 15:01:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.824 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.824 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.824 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.824 15:01:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.824 15:01:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:22.824 15:01:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 
00:24:22.824 15:01:22 -- host/auth.sh@44 -- # digest=sha384 00:24:22.824 15:01:22 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.824 15:01:22 -- host/auth.sh@44 -- # keyid=2 00:24:22.824 15:01:22 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:22.824 15:01:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.824 15:01:22 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:22.824 15:01:22 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:22.824 15:01:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:22.824 15:01:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.825 15:01:22 -- host/auth.sh@68 -- # digest=sha384 00:24:22.825 15:01:22 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:22.825 15:01:22 -- host/auth.sh@68 -- # keyid=2 00:24:22.825 15:01:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:22.825 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.825 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:22.825 15:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.825 15:01:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.825 15:01:22 -- nvmf/common.sh@717 -- # local ip 00:24:22.825 15:01:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.825 15:01:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.825 15:01:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.825 15:01:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.825 15:01:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:22.825 15:01:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:22.825 15:01:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:22.825 15:01:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:22.825 15:01:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
00:24:22.825 15:01:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:22.825 15:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.825 15:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:23.082 nvme0n1 00:24:23.082 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.082 15:01:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.082 15:01:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.082 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.082 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.082 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.082 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.082 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.082 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.082 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.083 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.083 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.083 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:23.083 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.083 15:01:23 -- host/auth.sh@44 -- # digest=sha384 00:24:23.083 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.083 15:01:23 -- host/auth.sh@44 -- # keyid=3 00:24:23.083 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:23.083 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.083 15:01:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:23.083 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:23.083 
15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:23.083 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.083 15:01:23 -- host/auth.sh@68 -- # digest=sha384 00:24:23.083 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:23.083 15:01:23 -- host/auth.sh@68 -- # keyid=3 00:24:23.083 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.083 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.083 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.083 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.083 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.083 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:23.083 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.083 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.083 15:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.083 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.083 15:01:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:23.083 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:23.083 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:23.083 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:23.083 15:01:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:23.083 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:23.083 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.083 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.648 nvme0n1 00:24:23.648 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.648 15:01:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.648 
15:01:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.648 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.648 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.648 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.648 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.648 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.648 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.648 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.648 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.648 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.648 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:23.648 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.648 15:01:23 -- host/auth.sh@44 -- # digest=sha384 00:24:23.648 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.648 15:01:23 -- host/auth.sh@44 -- # keyid=4 00:24:23.648 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:23.648 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.648 15:01:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:23.648 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:23.648 15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:23.648 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.648 15:01:23 -- host/auth.sh@68 -- # digest=sha384 00:24:23.648 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:23.648 15:01:23 -- host/auth.sh@68 -- # keyid=4 00:24:23.648 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.648 15:01:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.648 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.648 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.648 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.648 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:23.648 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.648 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.648 15:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.648 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.648 15:01:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:23.648 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:23.648 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:23.648 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:23.648 15:01:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:23.648 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.648 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.648 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.905 nvme0n1 00:24:23.905 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.905 15:01:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.905 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.905 15:01:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.905 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.905 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.905 15:01:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.905 15:01:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.905 15:01:23 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:24:23.905 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.905 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.905 15:01:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.905 15:01:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.905 15:01:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:23.905 15:01:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.905 15:01:23 -- host/auth.sh@44 -- # digest=sha384 00:24:23.905 15:01:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:23.905 15:01:23 -- host/auth.sh@44 -- # keyid=0 00:24:23.905 15:01:23 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:23.905 15:01:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.905 15:01:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:23.905 15:01:23 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:23.905 15:01:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:23.905 15:01:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.905 15:01:23 -- host/auth.sh@68 -- # digest=sha384 00:24:23.905 15:01:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:23.905 15:01:23 -- host/auth.sh@68 -- # keyid=0 00:24:23.905 15:01:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.905 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.905 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:23.905 15:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.905 15:01:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.905 15:01:23 -- nvmf/common.sh@717 -- # local ip 00:24:23.905 15:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.905 15:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.905 15:01:23 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.905 15:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.905 15:01:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:23.905 15:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:23.905 15:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:23.905 15:01:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:23.905 15:01:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:23.905 15:01:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:23.905 15:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.906 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:24:24.474 nvme0n1 00:24:24.474 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.474 15:01:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.474 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.474 15:01:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.474 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:24.474 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.474 15:01:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.474 15:01:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.474 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.474 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:24.733 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.733 15:01:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.733 15:01:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:24.733 15:01:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.733 15:01:24 -- host/auth.sh@44 -- # digest=sha384 00:24:24.733 15:01:24 -- host/auth.sh@44 
-- # dhgroup=ffdhe6144 00:24:24.733 15:01:24 -- host/auth.sh@44 -- # keyid=1 00:24:24.733 15:01:24 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:24.733 15:01:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.733 15:01:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:24.733 15:01:24 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:24.733 15:01:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:24.733 15:01:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.733 15:01:24 -- host/auth.sh@68 -- # digest=sha384 00:24:24.733 15:01:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:24.733 15:01:24 -- host/auth.sh@68 -- # keyid=1 00:24:24.733 15:01:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.733 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.733 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:24.733 15:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.733 15:01:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.733 15:01:24 -- nvmf/common.sh@717 -- # local ip 00:24:24.733 15:01:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.733 15:01:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.734 15:01:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.734 15:01:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.734 15:01:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:24.734 15:01:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:24.734 15:01:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:24.734 15:01:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:24.734 15:01:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:24.734 15:01:24 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:24.734 15:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.734 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:25.302 nvme0n1 00:24:25.302 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.302 15:01:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.302 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.302 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.302 15:01:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.302 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.302 15:01:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.302 15:01:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.302 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.302 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.302 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.302 15:01:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.302 15:01:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:25.302 15:01:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.302 15:01:25 -- host/auth.sh@44 -- # digest=sha384 00:24:25.302 15:01:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.302 15:01:25 -- host/auth.sh@44 -- # keyid=2 00:24:25.302 15:01:25 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:25.302 15:01:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.302 15:01:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:25.302 15:01:25 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:25.302 15:01:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:25.302 15:01:25 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.302 15:01:25 -- host/auth.sh@68 -- # digest=sha384 00:24:25.302 15:01:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:25.302 15:01:25 -- host/auth.sh@68 -- # keyid=2 00:24:25.302 15:01:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.302 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.302 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.302 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.302 15:01:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.302 15:01:25 -- nvmf/common.sh@717 -- # local ip 00:24:25.302 15:01:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.302 15:01:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.302 15:01:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.302 15:01:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.302 15:01:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:25.302 15:01:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:25.302 15:01:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:25.302 15:01:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:25.302 15:01:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:25.302 15:01:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.302 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.302 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.869 nvme0n1 00:24:25.869 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.869 15:01:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.869 15:01:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.869 15:01:25 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.869 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.869 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.869 15:01:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.869 15:01:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.869 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.869 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.869 15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.869 15:01:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.869 15:01:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:25.869 15:01:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.869 15:01:25 -- host/auth.sh@44 -- # digest=sha384 00:24:25.869 15:01:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.869 15:01:25 -- host/auth.sh@44 -- # keyid=3 00:24:25.869 15:01:25 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:25.869 15:01:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.870 15:01:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:25.870 15:01:25 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:25.870 15:01:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:25.870 15:01:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.870 15:01:25 -- host/auth.sh@68 -- # digest=sha384 00:24:25.870 15:01:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:25.870 15:01:25 -- host/auth.sh@68 -- # keyid=3 00:24:25.870 15:01:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.870 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.870 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:25.870 
15:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.870 15:01:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.870 15:01:25 -- nvmf/common.sh@717 -- # local ip 00:24:25.870 15:01:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.870 15:01:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.870 15:01:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.870 15:01:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.870 15:01:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:25.870 15:01:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:25.870 15:01:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:25.870 15:01:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:25.870 15:01:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:25.870 15:01:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.870 15:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.870 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:26.445 nvme0n1 00:24:26.445 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.445 15:01:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.445 15:01:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.445 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.445 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:26.445 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.445 15:01:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.445 15:01:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.445 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.445 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:26.445 15:01:26 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.445 15:01:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.445 15:01:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:26.445 15:01:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.445 15:01:26 -- host/auth.sh@44 -- # digest=sha384 00:24:26.445 15:01:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.445 15:01:26 -- host/auth.sh@44 -- # keyid=4 00:24:26.445 15:01:26 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:26.445 15:01:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.445 15:01:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:26.445 15:01:26 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:26.445 15:01:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:26.445 15:01:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.445 15:01:26 -- host/auth.sh@68 -- # digest=sha384 00:24:26.445 15:01:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:26.445 15:01:26 -- host/auth.sh@68 -- # keyid=4 00:24:26.445 15:01:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.445 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.445 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:26.445 15:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.445 15:01:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.445 15:01:26 -- nvmf/common.sh@717 -- # local ip 00:24:26.445 15:01:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.445 15:01:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.445 15:01:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.445 15:01:26 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.445 15:01:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:26.445 15:01:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:26.445 15:01:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:26.445 15:01:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:26.445 15:01:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:26.445 15:01:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.445 15:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.445 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 nvme0n1 00:24:27.013 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 15:01:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.013 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 15:01:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.013 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 15:01:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.013 15:01:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.013 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:27.272 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.272 15:01:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.272 15:01:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.272 15:01:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:27.272 15:01:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.272 15:01:27 -- host/auth.sh@44 -- # digest=sha384 00:24:27.272 15:01:27 -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:24:27.272 15:01:27 -- host/auth.sh@44 -- # keyid=0 00:24:27.272 15:01:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:27.272 15:01:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.272 15:01:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:27.272 15:01:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:27.272 15:01:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:27.272 15:01:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.272 15:01:27 -- host/auth.sh@68 -- # digest=sha384 00:24:27.272 15:01:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:27.272 15:01:27 -- host/auth.sh@68 -- # keyid=0 00:24:27.272 15:01:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.272 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.272 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:27.272 15:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.273 15:01:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.273 15:01:27 -- nvmf/common.sh@717 -- # local ip 00:24:27.273 15:01:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.273 15:01:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.273 15:01:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.273 15:01:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.273 15:01:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:27.273 15:01:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:27.273 15:01:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:27.273 15:01:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:27.273 15:01:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:27.273 15:01:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:27.273 15:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.273 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:24:28.212 nvme0n1 00:24:28.212 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.212 15:01:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.212 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.212 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:28.212 15:01:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.212 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.212 15:01:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.212 15:01:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.212 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.212 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:28.212 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.212 15:01:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.212 15:01:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:28.212 15:01:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.212 15:01:28 -- host/auth.sh@44 -- # digest=sha384 00:24:28.212 15:01:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.212 15:01:28 -- host/auth.sh@44 -- # keyid=1 00:24:28.212 15:01:28 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:28.212 15:01:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.212 15:01:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:28.213 15:01:28 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:28.213 15:01:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:28.213 15:01:28 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.213 15:01:28 -- host/auth.sh@68 -- # digest=sha384 00:24:28.213 15:01:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:28.213 15:01:28 -- host/auth.sh@68 -- # keyid=1 00:24:28.213 15:01:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.213 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.213 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:28.213 15:01:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.213 15:01:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.213 15:01:28 -- nvmf/common.sh@717 -- # local ip 00:24:28.213 15:01:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.213 15:01:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.213 15:01:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.213 15:01:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.213 15:01:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:28.213 15:01:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:28.213 15:01:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:28.213 15:01:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:28.213 15:01:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:28.213 15:01:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:28.213 15:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.213 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:24:29.150 nvme0n1 00:24:29.150 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.150 15:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.150 15:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.150 15:01:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.150 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.150 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.150 15:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.150 15:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.150 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.150 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.150 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.150 15:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.150 15:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:29.150 15:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.150 15:01:29 -- host/auth.sh@44 -- # digest=sha384 00:24:29.150 15:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.150 15:01:29 -- host/auth.sh@44 -- # keyid=2 00:24:29.150 15:01:29 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:29.150 15:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.150 15:01:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:29.150 15:01:29 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:29.150 15:01:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:29.150 15:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.150 15:01:29 -- host/auth.sh@68 -- # digest=sha384 00:24:29.150 15:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:29.150 15:01:29 -- host/auth.sh@68 -- # keyid=2 00:24:29.150 15:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.150 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.150 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:29.150 15:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:24:29.150 15:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.150 15:01:29 -- nvmf/common.sh@717 -- # local ip 00:24:29.150 15:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.150 15:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.151 15:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.151 15:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.151 15:01:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:29.151 15:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:29.151 15:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:29.151 15:01:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:29.151 15:01:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:29.151 15:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:29.151 15:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.151 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:24:30.116 nvme0n1 00:24:30.116 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.116 15:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.116 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.116 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:30.116 15:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.116 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.374 15:01:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.374 15:01:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.374 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.374 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:30.374 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.374 15:01:30 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.374 15:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:30.374 15:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.374 15:01:30 -- host/auth.sh@44 -- # digest=sha384 00:24:30.374 15:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.374 15:01:30 -- host/auth.sh@44 -- # keyid=3 00:24:30.374 15:01:30 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:30.374 15:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:30.374 15:01:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:30.374 15:01:30 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:30.374 15:01:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:30.374 15:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.374 15:01:30 -- host/auth.sh@68 -- # digest=sha384 00:24:30.374 15:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:30.374 15:01:30 -- host/auth.sh@68 -- # keyid=3 00:24:30.374 15:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.374 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.374 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:30.374 15:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.374 15:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.374 15:01:30 -- nvmf/common.sh@717 -- # local ip 00:24:30.374 15:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.374 15:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.374 15:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.374 15:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.374 15:01:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:30.374 15:01:30 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:30.374 15:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:30.374 15:01:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:30.374 15:01:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:30.374 15:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:30.374 15:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.374 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:31.308 nvme0n1 00:24:31.308 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.308 15:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.308 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.308 15:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.308 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:31.308 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.308 15:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.308 15:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.308 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.308 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:31.308 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.308 15:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.308 15:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:31.308 15:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.308 15:01:31 -- host/auth.sh@44 -- # digest=sha384 00:24:31.308 15:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.308 15:01:31 -- host/auth.sh@44 -- # keyid=4 00:24:31.308 15:01:31 -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:31.308 15:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.308 15:01:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.308 15:01:31 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:31.308 15:01:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:31.308 15:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.308 15:01:31 -- host/auth.sh@68 -- # digest=sha384 00:24:31.308 15:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.308 15:01:31 -- host/auth.sh@68 -- # keyid=4 00:24:31.308 15:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.308 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.308 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:31.308 15:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.308 15:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.308 15:01:31 -- nvmf/common.sh@717 -- # local ip 00:24:31.308 15:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.308 15:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.308 15:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.308 15:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.308 15:01:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:31.308 15:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.308 15:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.308 15:01:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:31.308 15:01:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:31.308 15:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:31.308 15:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:31.308 15:01:31 -- common/autotest_common.sh@10 -- # set +x
00:24:32.686 nvme0n1
00:24:32.686 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.686 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.686 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.686 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.686 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:24:32.687 15:01:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:32.687 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:32.687 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:24:32.687 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # digest=sha512
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # keyid=0
00:24:32.687 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:32.687 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:32.687 15:01:32 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:32.687 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0
00:24:32.687 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # digest=sha512
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # keyid=0
00:24:32.687 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:32.687 15:01:32 -- nvmf/common.sh@717 -- # local ip
00:24:32.687 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:32.687 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:32.687 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:32.687 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 nvme0n1
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:32.687 15:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:24:32.687 15:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # digest=sha512
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@44 -- # keyid=1
00:24:32.687 15:01:32 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:32.687 15:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:32.687 15:01:32 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:32.687 15:01:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1
00:24:32.687 15:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # digest=sha512
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:32.687 15:01:32 -- host/auth.sh@68 -- # keyid=1
00:24:32.687 15:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.687 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.687 15:01:32 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:32.687 15:01:32 -- nvmf/common.sh@717 -- # local ip
00:24:32.687 15:01:32 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:32.687 15:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:32.687 15:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:32.687 15:01:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:32.687 15:01:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:32.687 15:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:32.687 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.687 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.944 nvme0n1
00:24:32.944 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.944 15:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:32.944 15:01:32 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:32.944 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.944 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.944 15:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.944 15:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:32.944 15:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:32.944 15:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.944 15:01:32 -- common/autotest_common.sh@10 -- # set +x
00:24:32.944 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.944 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:32.944 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:24:32.944 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:32.945 15:01:33 -- host/auth.sh@44 -- # digest=sha512
00:24:32.945 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:32.945 15:01:33 -- host/auth.sh@44 -- # keyid=2
00:24:32.945 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:32.945 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:32.945 15:01:33 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:32.945 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:32.945 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2
00:24:32.945 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:32.945 15:01:33 -- host/auth.sh@68 -- # digest=sha512
00:24:32.945 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:32.945 15:01:33 -- host/auth.sh@68 -- # keyid=2
00:24:32.945 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:32.945 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:32.945 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:32.945 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:32.945 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:32.945 15:01:33 -- nvmf/common.sh@717 -- # local ip
00:24:32.945 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:32.945 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:32.945 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:32.945 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:32.945 15:01:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:32.945 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:32.945 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:32.945 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:32.945 15:01:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:33.202 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:33.202 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.202 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.202 nvme0n1
00:24:33.202 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.202 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.202 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.202 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:33.202 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.202 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.202 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.202 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.202 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.202 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.461 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.461 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:33.461 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:24:33.461 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:33.461 15:01:33 -- host/auth.sh@44 -- # digest=sha512
00:24:33.461 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:33.461 15:01:33 -- host/auth.sh@44 -- # keyid=3
00:24:33.461 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:33.461 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:33.461 15:01:33 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:33.461 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:33.461 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3
00:24:33.461 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:33.461 15:01:33 -- host/auth.sh@68 -- # digest=sha512
00:24:33.461 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:33.461 15:01:33 -- host/auth.sh@68 -- # keyid=3
00:24:33.461 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:33.461 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.461 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.461 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.461 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:33.461 15:01:33 -- nvmf/common.sh@717 -- # local ip
00:24:33.461 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:33.461 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:33.461 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.461 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.461 15:01:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:33.461 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:33.461 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:33.461 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:33.461 15:01:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:33.461 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:24:33.461 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.461 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.461 nvme0n1
00:24:33.461 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.461 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.461 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.461 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.461 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:33.461 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.461 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.461 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.461 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.461 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.720 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.720 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:33.720 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:24:33.720 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:33.720 15:01:33 -- host/auth.sh@44 -- # digest=sha512
00:24:33.720 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:33.720 15:01:33 -- host/auth.sh@44 -- # keyid=4
00:24:33.720 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:33.720 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:33.720 15:01:33 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:33.721 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:33.721 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4
00:24:33.721 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:33.721 15:01:33 -- host/auth.sh@68 -- # digest=sha512
00:24:33.721 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:33.721 15:01:33 -- host/auth.sh@68 -- # keyid=4
00:24:33.721 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:24:33.721 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.721 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.721 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.721 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:33.721 15:01:33 -- nvmf/common.sh@717 -- # local ip
00:24:33.721 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:33.721 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:33.721 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.721 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.721 15:01:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:33.721 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:33.721 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:33.721 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:33.721 15:01:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:33.721 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:33.721 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.721 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.721 nvme0n1
00:24:33.721 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.721 15:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:33.721 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.721 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.721 15:01:33 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:33.721 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.721 15:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:33.721 15:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:33.721 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.721 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.984 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.984 15:01:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:33.984 15:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:33.984 15:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:24:33.984 15:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:33.984 15:01:33 -- host/auth.sh@44 -- # digest=sha512
00:24:33.985 15:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:33.985 15:01:33 -- host/auth.sh@44 -- # keyid=0
00:24:33.985 15:01:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:33.985 15:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:33.985 15:01:33 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:33.985 15:01:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:33.985 15:01:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0
00:24:33.985 15:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:33.985 15:01:33 -- host/auth.sh@68 -- # digest=sha512
00:24:33.985 15:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:33.985 15:01:33 -- host/auth.sh@68 -- # keyid=0
00:24:33.985 15:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:33.985 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.985 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:33.985 15:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:33.985 15:01:33 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:33.985 15:01:33 -- nvmf/common.sh@717 -- # local ip
00:24:33.985 15:01:33 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:33.985 15:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:33.985 15:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:33.985 15:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:33.985 15:01:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:33.985 15:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:33.985 15:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:33.985 15:01:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:33.985 15:01:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:33.985 15:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:33.985 15:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:33.985 15:01:33 -- common/autotest_common.sh@10 -- # set +x
00:24:34.244 nvme0n1
00:24:34.244 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.244 15:01:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.244 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.244 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.244 15:01:34 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:34.244 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.244 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.244 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.244 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.244 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.244 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.244 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:34.244 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:24:34.244 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:34.244 15:01:34 -- host/auth.sh@44 -- # digest=sha512
00:24:34.244 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:34.244 15:01:34 -- host/auth.sh@44 -- # keyid=1
00:24:34.244 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:34.244 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:34.244 15:01:34 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:34.245 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:34.245 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1
00:24:34.245 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:34.245 15:01:34 -- host/auth.sh@68 -- # digest=sha512
00:24:34.245 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:34.245 15:01:34 -- host/auth.sh@68 -- # keyid=1
00:24:34.245 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:34.245 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.245 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.245 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.245 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:34.245 15:01:34 -- nvmf/common.sh@717 -- # local ip
00:24:34.245 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:34.245 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:34.245 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.245 15:01:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.245 15:01:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:34.245 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:34.245 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:34.245 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:34.245 15:01:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:34.245 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:34.245 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.245 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.502 nvme0n1
00:24:34.502 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.502 15:01:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.502 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.502 15:01:34 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:34.502 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.502 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.502 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.502 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.502 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.502 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.502 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.502 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:34.502 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:24:34.502 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:34.502 15:01:34 -- host/auth.sh@44 -- # digest=sha512
00:24:34.502 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:34.502 15:01:34 -- host/auth.sh@44 -- # keyid=2
00:24:34.502 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:34.502 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:34.502 15:01:34 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:34.502 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP:
00:24:34.502 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2
00:24:34.502 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:34.502 15:01:34 -- host/auth.sh@68 -- # digest=sha512
00:24:34.502 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:34.502 15:01:34 -- host/auth.sh@68 -- # keyid=2
00:24:34.502 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:34.502 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.502 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.502 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.502 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:34.502 15:01:34 -- nvmf/common.sh@717 -- # local ip
00:24:34.502 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:34.502 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:34.502 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.502 15:01:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.502 15:01:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:34.502 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:34.502 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:34.502 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:34.502 15:01:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:34.502 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:24:34.502 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.502 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.760 nvme0n1
00:24:34.760 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.760 15:01:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:34.760 15:01:34 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:34.760 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.760 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.760 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.760 15:01:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:34.760 15:01:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:34.760 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.760 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.760 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.760 15:01:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:34.760 15:01:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:24:34.760 15:01:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:34.760 15:01:34 -- host/auth.sh@44 -- # digest=sha512
00:24:34.760 15:01:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:34.760 15:01:34 -- host/auth.sh@44 -- # keyid=3
00:24:34.760 15:01:34 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:34.760 15:01:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:34.760 15:01:34 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:34.760 15:01:34 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==:
00:24:34.760 15:01:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3
00:24:34.760 15:01:34 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:34.760 15:01:34 -- host/auth.sh@68 -- # digest=sha512
00:24:34.760 15:01:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
15:01:34 -- host/auth.sh@68 -- # keyid=3
00:24:34.760 15:01:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:34.760 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.760 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:34.760 15:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:34.760 15:01:34 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:34.760 15:01:34 -- nvmf/common.sh@717 -- # local ip
00:24:34.760 15:01:34 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:34.760 15:01:34 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:34.760 15:01:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:34.760 15:01:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:34.760 15:01:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:34.760 15:01:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:34.760 15:01:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:34.760 15:01:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:34.760 15:01:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:34.760 15:01:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:24:34.760 15:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:34.760 15:01:34 -- common/autotest_common.sh@10 -- # set +x
00:24:35.019 nvme0n1
00:24:35.019 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.019 15:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.019 15:01:35 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:35.019 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.019 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.019 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.019 15:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.019 15:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.019 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.019 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.019 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.019 15:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:35.019 15:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:24:35.019 15:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:35.019 15:01:35 -- host/auth.sh@44 -- # digest=sha512
00:24:35.019 15:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:35.019 15:01:35 -- host/auth.sh@44 -- # keyid=4
00:24:35.019 15:01:35 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:35.019 15:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:35.019 15:01:35 -- host/auth.sh@48 -- # echo ffdhe3072
00:24:35.019 15:01:35 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=:
00:24:35.019 15:01:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4
00:24:35.019 15:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:35.019 15:01:35 -- host/auth.sh@68 -- # digest=sha512
00:24:35.019 15:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:24:35.019 15:01:35 -- host/auth.sh@68 -- # keyid=4
00:24:35.019 15:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:24:35.019 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.019 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.278 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.278 15:01:35 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:35.278 15:01:35 -- nvmf/common.sh@717 -- # local ip
00:24:35.278 15:01:35 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:35.278 15:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:35.278 15:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.278 15:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.278 15:01:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:35.278 15:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:35.278 15:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:35.278 15:01:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:35.278 15:01:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:35.278 15:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:35.278 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.278 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.278 nvme0n1
00:24:35.278 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.278 15:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.278 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.278 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.278 15:01:35 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:35.278 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.278 15:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.278 15:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.278 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.278 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.540 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.540 15:01:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:35.540 15:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:35.540 15:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:24:35.540 15:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:35.540 15:01:35 -- host/auth.sh@44 -- # digest=sha512
00:24:35.540 15:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:35.540 15:01:35 -- host/auth.sh@44 -- # keyid=0
00:24:35.540 15:01:35 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:35.540 15:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:35.540 15:01:35 -- host/auth.sh@48 -- # echo ffdhe4096
00:24:35.540 15:01:35 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH:
00:24:35.540 15:01:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0
00:24:35.540 15:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:35.540 15:01:35 -- host/auth.sh@68 -- # digest=sha512
00:24:35.540 15:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:24:35.540 15:01:35 -- host/auth.sh@68 -- # keyid=0
00:24:35.540 15:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:24:35.540 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.540 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.540 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.540 15:01:35 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:35.540 15:01:35 -- nvmf/common.sh@717 -- # local ip
00:24:35.540 15:01:35 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:35.540 15:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:35.540 15:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:35.540 15:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:35.540 15:01:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:24:35.540 15:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:24:35.540 15:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:24:35.540 15:01:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:24:35.540 15:01:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:24:35.540 15:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:35.540 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.540 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.800 nvme0n1
00:24:35.800 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.800 15:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:35.800 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.800 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.800 15:01:35 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:35.800 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.800 15:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.800 15:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:35.800 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:35.800 15:01:35 -- common/autotest_common.sh@10 -- # set +x
00:24:35.800 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:35.800 15:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:35.800 15:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:24:35.800 15:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:35.800 15:01:35 -- host/auth.sh@44 -- # digest=sha512
00:24:35.800 15:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:35.800 15:01:35 -- host/auth.sh@44 -- # keyid=1
00:24:35.800 15:01:35 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==:
00:24:35.800 15:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)'
00:24:35.800 15:01:35
-- host/auth.sh@48 -- # echo ffdhe4096 00:24:35.800 15:01:35 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:35.800 15:01:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:35.800 15:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.800 15:01:35 -- host/auth.sh@68 -- # digest=sha512 00:24:35.800 15:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:35.800 15:01:35 -- host/auth.sh@68 -- # keyid=1 00:24:35.800 15:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.800 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.800 15:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:35.800 15:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.800 15:01:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.800 15:01:35 -- nvmf/common.sh@717 -- # local ip 00:24:35.800 15:01:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.800 15:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.800 15:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.800 15:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.800 15:01:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:35.800 15:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:35.800 15:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:35.800 15:01:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:35.800 15:01:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:35.800 15:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:35.800 15:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.800 15:01:35 -- common/autotest_common.sh@10 -- # set +x 
00:24:36.059 nvme0n1 00:24:36.059 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.318 15:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.318 15:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.318 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.318 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.318 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.318 15:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.318 15:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.318 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.318 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.318 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.318 15:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.318 15:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:36.318 15:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.318 15:01:36 -- host/auth.sh@44 -- # digest=sha512 00:24:36.318 15:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.318 15:01:36 -- host/auth.sh@44 -- # keyid=2 00:24:36.318 15:01:36 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:36.318 15:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.318 15:01:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:36.318 15:01:36 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:36.318 15:01:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:36.319 15:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.319 15:01:36 -- host/auth.sh@68 -- # digest=sha512 00:24:36.319 15:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:36.319 15:01:36 -- host/auth.sh@68 -- # keyid=2 00:24:36.319 15:01:36 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.319 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.319 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.319 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.319 15:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.319 15:01:36 -- nvmf/common.sh@717 -- # local ip 00:24:36.319 15:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.319 15:01:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.319 15:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.319 15:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.319 15:01:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:36.319 15:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:36.319 15:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:36.319 15:01:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:36.319 15:01:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:36.319 15:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:36.319 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.319 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.579 nvme0n1 00:24:36.579 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.579 15:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.579 15:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.579 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.579 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.579 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.579 15:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.579 15:01:36 -- host/auth.sh@74 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.579 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.579 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.579 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.579 15:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.579 15:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:36.579 15:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.579 15:01:36 -- host/auth.sh@44 -- # digest=sha512 00:24:36.579 15:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.579 15:01:36 -- host/auth.sh@44 -- # keyid=3 00:24:36.579 15:01:36 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:36.579 15:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.579 15:01:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:36.579 15:01:36 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:36.579 15:01:36 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:36.579 15:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.579 15:01:36 -- host/auth.sh@68 -- # digest=sha512 00:24:36.579 15:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:36.579 15:01:36 -- host/auth.sh@68 -- # keyid=3 00:24:36.579 15:01:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:36.579 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.579 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:36.579 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.579 15:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.579 15:01:36 -- nvmf/common.sh@717 -- # local ip 00:24:36.579 15:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.579 15:01:36 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:36.579 15:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.579 15:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.579 15:01:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:36.579 15:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:36.579 15:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:36.579 15:01:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:36.580 15:01:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:36.580 15:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:36.580 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.580 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:37.150 nvme0n1 00:24:37.150 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.150 15:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.150 15:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.150 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.150 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:37.150 15:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.150 15:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.150 15:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.150 15:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.150 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:24:37.150 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.150 15:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.150 15:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:37.150 15:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.150 15:01:37 -- 
host/auth.sh@44 -- # digest=sha512 00:24:37.150 15:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:37.150 15:01:37 -- host/auth.sh@44 -- # keyid=4 00:24:37.150 15:01:37 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:37.150 15:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.150 15:01:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:37.150 15:01:37 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:37.150 15:01:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:37.150 15:01:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.150 15:01:37 -- host/auth.sh@68 -- # digest=sha512 00:24:37.150 15:01:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:37.150 15:01:37 -- host/auth.sh@68 -- # keyid=4 00:24:37.150 15:01:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.150 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.150 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.150 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.150 15:01:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.150 15:01:37 -- nvmf/common.sh@717 -- # local ip 00:24:37.150 15:01:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.150 15:01:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.150 15:01:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.150 15:01:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.150 15:01:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:37.150 15:01:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.150 15:01:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.150 15:01:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 
00:24:37.150 15:01:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:37.150 15:01:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.150 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.150 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.409 nvme0n1 00:24:37.409 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.409 15:01:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.409 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.409 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.409 15:01:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.409 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.409 15:01:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.409 15:01:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.409 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.409 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.409 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.409 15:01:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.409 15:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.409 15:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:37.409 15:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.409 15:01:37 -- host/auth.sh@44 -- # digest=sha512 00:24:37.409 15:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.409 15:01:37 -- host/auth.sh@44 -- # keyid=0 00:24:37.409 15:01:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:37.409 15:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.409 15:01:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:37.409 15:01:37 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:37.409 15:01:37 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:37.409 15:01:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.409 15:01:37 -- host/auth.sh@68 -- # digest=sha512 00:24:37.409 15:01:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:37.409 15:01:37 -- host/auth.sh@68 -- # keyid=0 00:24:37.409 15:01:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.409 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.409 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.409 15:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.409 15:01:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.409 15:01:37 -- nvmf/common.sh@717 -- # local ip 00:24:37.409 15:01:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.409 15:01:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.409 15:01:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.409 15:01:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.409 15:01:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:37.409 15:01:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.409 15:01:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.409 15:01:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:37.409 15:01:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:37.409 15:01:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:37.409 15:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.409 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:24:37.975 nvme0n1 00:24:37.975 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:24:37.975 15:01:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.975 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.975 15:01:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.975 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:37.975 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.975 15:01:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.975 15:01:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.233 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.233 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.233 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.233 15:01:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.233 15:01:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:38.233 15:01:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.233 15:01:38 -- host/auth.sh@44 -- # digest=sha512 00:24:38.233 15:01:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.233 15:01:38 -- host/auth.sh@44 -- # keyid=1 00:24:38.233 15:01:38 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:38.233 15:01:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.233 15:01:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:38.233 15:01:38 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:38.233 15:01:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:38.233 15:01:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.233 15:01:38 -- host/auth.sh@68 -- # digest=sha512 00:24:38.233 15:01:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:38.233 15:01:38 -- host/auth.sh@68 -- # keyid=1 00:24:38.233 15:01:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe6144 00:24:38.233 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.233 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.233 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.233 15:01:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.233 15:01:38 -- nvmf/common.sh@717 -- # local ip 00:24:38.233 15:01:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.233 15:01:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.233 15:01:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.233 15:01:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.233 15:01:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:38.233 15:01:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.233 15:01:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.233 15:01:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:38.233 15:01:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:38.233 15:01:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:38.233 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.233 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.798 nvme0n1 00:24:38.798 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.798 15:01:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.798 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.798 15:01:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.798 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.798 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.798 15:01:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.798 15:01:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.798 15:01:38 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.798 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.798 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.798 15:01:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.798 15:01:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:38.798 15:01:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.798 15:01:38 -- host/auth.sh@44 -- # digest=sha512 00:24:38.798 15:01:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.798 15:01:38 -- host/auth.sh@44 -- # keyid=2 00:24:38.798 15:01:38 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:38.798 15:01:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.798 15:01:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:38.798 15:01:38 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:38.798 15:01:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:38.798 15:01:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.798 15:01:38 -- host/auth.sh@68 -- # digest=sha512 00:24:38.798 15:01:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:38.798 15:01:38 -- host/auth.sh@68 -- # keyid=2 00:24:38.798 15:01:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.798 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.798 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:38.798 15:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.798 15:01:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.798 15:01:38 -- nvmf/common.sh@717 -- # local ip 00:24:38.798 15:01:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.798 15:01:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.798 15:01:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.798 
15:01:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.798 15:01:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:38.798 15:01:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.798 15:01:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.799 15:01:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:38.799 15:01:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:38.799 15:01:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:38.799 15:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.799 15:01:38 -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 nvme0n1 00:24:39.366 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.366 15:01:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.366 15:01:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.366 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.366 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.366 15:01:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.366 15:01:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.366 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.366 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.366 15:01:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.366 15:01:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:39.366 15:01:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.366 15:01:39 -- host/auth.sh@44 -- # digest=sha512 00:24:39.366 15:01:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.366 15:01:39 -- 
host/auth.sh@44 -- # keyid=3 00:24:39.366 15:01:39 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:39.366 15:01:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.366 15:01:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:39.366 15:01:39 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:39.366 15:01:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:39.366 15:01:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.366 15:01:39 -- host/auth.sh@68 -- # digest=sha512 00:24:39.366 15:01:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:39.366 15:01:39 -- host/auth.sh@68 -- # keyid=3 00:24:39.366 15:01:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:39.366 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.366 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.367 15:01:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.367 15:01:39 -- nvmf/common.sh@717 -- # local ip 00:24:39.367 15:01:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.367 15:01:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.367 15:01:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.367 15:01:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.367 15:01:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:39.367 15:01:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.367 15:01:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.367 15:01:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:39.367 15:01:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:39.367 15:01:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f 
ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:39.367 15:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.367 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:24:39.934 nvme0n1 00:24:39.934 15:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.934 15:01:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.934 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.934 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:39.934 15:01:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.934 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.194 15:01:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.194 15:01:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.194 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.194 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.194 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.194 15:01:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.194 15:01:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:40.194 15:01:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.194 15:01:40 -- host/auth.sh@44 -- # digest=sha512 00:24:40.194 15:01:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.194 15:01:40 -- host/auth.sh@44 -- # keyid=4 00:24:40.194 15:01:40 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:40.194 15:01:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.194 15:01:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.194 15:01:40 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:40.194 15:01:40 -- host/auth.sh@111 -- # connect_authenticate sha512 
ffdhe6144 4 00:24:40.194 15:01:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.194 15:01:40 -- host/auth.sh@68 -- # digest=sha512 00:24:40.194 15:01:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.194 15:01:40 -- host/auth.sh@68 -- # keyid=4 00:24:40.194 15:01:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:40.194 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.194 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.194 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.194 15:01:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.194 15:01:40 -- nvmf/common.sh@717 -- # local ip 00:24:40.194 15:01:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.194 15:01:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.194 15:01:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.194 15:01:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.194 15:01:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:40.194 15:01:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.194 15:01:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.194 15:01:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:40.194 15:01:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:40.194 15:01:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.194 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.194 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.763 nvme0n1 00:24:40.763 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.763 15:01:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.763 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:40.763 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.763 15:01:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.763 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.763 15:01:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.763 15:01:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.763 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.763 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:40.763 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.763 15:01:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.763 15:01:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.763 15:01:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:40.763 15:01:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.763 15:01:40 -- host/auth.sh@44 -- # digest=sha512 00:24:40.763 15:01:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.763 15:01:40 -- host/auth.sh@44 -- # keyid=0 00:24:40.763 15:01:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:40.763 15:01:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.763 15:01:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:40.763 15:01:40 -- host/auth.sh@49 -- # echo DHHC-1:00:MDdkYzUyYzdkNDg5MWNhYjg4MGZkYjQ2MzQxM2YyMmETZOrH: 00:24:40.763 15:01:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:40.763 15:01:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.763 15:01:40 -- host/auth.sh@68 -- # digest=sha512 00:24:40.763 15:01:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:40.763 15:01:40 -- host/auth.sh@68 -- # keyid=0 00:24:40.763 15:01:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.763 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.763 15:01:40 -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.763 15:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.763 15:01:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.763 15:01:40 -- nvmf/common.sh@717 -- # local ip 00:24:40.763 15:01:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.763 15:01:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.763 15:01:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.763 15:01:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.763 15:01:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:40.763 15:01:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.763 15:01:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.763 15:01:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:40.763 15:01:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:40.763 15:01:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:40.763 15:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.763 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:24:41.699 nvme0n1 00:24:41.699 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.699 15:01:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.699 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.699 15:01:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.699 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:41.699 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.699 15:01:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.699 15:01:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.699 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.699 15:01:41 -- common/autotest_common.sh@10 -- 
# set +x 00:24:41.957 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.957 15:01:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.957 15:01:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:41.957 15:01:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.957 15:01:41 -- host/auth.sh@44 -- # digest=sha512 00:24:41.957 15:01:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.957 15:01:41 -- host/auth.sh@44 -- # keyid=1 00:24:41.957 15:01:41 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:41.957 15:01:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.957 15:01:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:41.958 15:01:41 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:41.958 15:01:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:41.958 15:01:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.958 15:01:41 -- host/auth.sh@68 -- # digest=sha512 00:24:41.958 15:01:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:41.958 15:01:41 -- host/auth.sh@68 -- # keyid=1 00:24:41.958 15:01:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.958 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.958 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:41.958 15:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.958 15:01:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.958 15:01:41 -- nvmf/common.sh@717 -- # local ip 00:24:41.958 15:01:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.958 15:01:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.958 15:01:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.958 15:01:41 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.958 15:01:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:41.958 15:01:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.958 15:01:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.958 15:01:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:41.958 15:01:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:41.958 15:01:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:41.958 15:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.958 15:01:41 -- common/autotest_common.sh@10 -- # set +x 00:24:42.895 nvme0n1 00:24:42.895 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.895 15:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.895 15:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.895 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.895 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:42.895 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.895 15:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.895 15:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.895 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.895 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:42.895 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.895 15:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.896 15:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:42.896 15:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.896 15:01:42 -- host/auth.sh@44 -- # digest=sha512 00:24:42.896 15:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.896 15:01:42 -- host/auth.sh@44 -- # keyid=2 00:24:42.896 
15:01:42 -- host/auth.sh@45 -- # key=DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:42.896 15:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.896 15:01:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:42.896 15:01:42 -- host/auth.sh@49 -- # echo DHHC-1:01:YWY5Njk4OWM4ZTFiNzY4Mjg3YjA1Y2FiY2Y0NDczZDPP/6AP: 00:24:42.896 15:01:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:42.896 15:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.896 15:01:42 -- host/auth.sh@68 -- # digest=sha512 00:24:42.896 15:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:42.896 15:01:42 -- host/auth.sh@68 -- # keyid=2 00:24:42.896 15:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.896 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.896 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:42.896 15:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.896 15:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.896 15:01:42 -- nvmf/common.sh@717 -- # local ip 00:24:42.896 15:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.896 15:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.896 15:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.896 15:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.896 15:01:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:42.896 15:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.896 15:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.896 15:01:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:42.896 15:01:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:42.896 15:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key2 00:24:42.896 15:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.896 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:24:44.270 nvme0n1 00:24:44.270 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.270 15:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.270 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.270 15:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.270 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:24:44.270 15:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.270 15:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.270 15:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.270 15:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.270 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:24:44.270 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.270 15:01:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.270 15:01:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:44.270 15:01:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.270 15:01:44 -- host/auth.sh@44 -- # digest=sha512 00:24:44.270 15:01:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.270 15:01:44 -- host/auth.sh@44 -- # keyid=3 00:24:44.270 15:01:44 -- host/auth.sh@45 -- # key=DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:44.270 15:01:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.270 15:01:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.270 15:01:44 -- host/auth.sh@49 -- # echo DHHC-1:02:MDgyNTAxYzA4ZjViNjBiNzc4NGIzZGQzM2I3ZTlmZmYyMWI1OGY3NmY1ZDliZmFhbSWdWw==: 00:24:44.270 15:01:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:44.270 15:01:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.270 15:01:44 -- host/auth.sh@68 -- # 
digest=sha512 00:24:44.270 15:01:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.270 15:01:44 -- host/auth.sh@68 -- # keyid=3 00:24:44.270 15:01:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.270 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.270 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:24:44.270 15:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.270 15:01:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.270 15:01:44 -- nvmf/common.sh@717 -- # local ip 00:24:44.270 15:01:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.270 15:01:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.270 15:01:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.270 15:01:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.270 15:01:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:44.270 15:01:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.270 15:01:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.270 15:01:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:44.270 15:01:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:44.270 15:01:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:44.270 15:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.270 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:24:45.203 nvme0n1 00:24:45.203 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.203 15:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.203 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.203 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:24:45.203 15:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:24:45.203 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.203 15:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.203 15:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.203 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.203 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:24:45.203 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.203 15:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.203 15:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:45.203 15:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.203 15:01:45 -- host/auth.sh@44 -- # digest=sha512 00:24:45.203 15:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.203 15:01:45 -- host/auth.sh@44 -- # keyid=4 00:24:45.203 15:01:45 -- host/auth.sh@45 -- # key=DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:45.203 15:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.203 15:01:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:45.203 15:01:45 -- host/auth.sh@49 -- # echo DHHC-1:03:M2ZmYjg5YjcyOWEyNzk4NWFhZTA3ZTE5ODgzMGZlMGIyYTExNmQ5MjNlODkwMWUwMGY5NWJlYTA4YzM5NjZjMJc9cz4=: 00:24:45.203 15:01:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:45.203 15:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.203 15:01:45 -- host/auth.sh@68 -- # digest=sha512 00:24:45.203 15:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:45.203 15:01:45 -- host/auth.sh@68 -- # keyid=4 00:24:45.203 15:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.203 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.203 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:24:45.203 15:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.203 15:01:45 
-- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.203 15:01:45 -- nvmf/common.sh@717 -- # local ip 00:24:45.203 15:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.203 15:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.203 15:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.203 15:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.203 15:01:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:45.203 15:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:45.203 15:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:45.203 15:01:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:45.203 15:01:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:45.203 15:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.203 15:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.203 15:01:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.135 nvme0n1 00:24:46.135 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.135 15:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.135 15:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.135 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.135 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.135 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.135 15:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.135 15:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.135 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.135 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.393 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.393 15:01:46 -- host/auth.sh@117 -- # 
nvmet_auth_set_key sha256 ffdhe2048 1 00:24:46.393 15:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.393 15:01:46 -- host/auth.sh@44 -- # digest=sha256 00:24:46.393 15:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.393 15:01:46 -- host/auth.sh@44 -- # keyid=1 00:24:46.393 15:01:46 -- host/auth.sh@45 -- # key=DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:46.393 15:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:46.393 15:01:46 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:46.393 15:01:46 -- host/auth.sh@49 -- # echo DHHC-1:00:N2RmOTI4NzU1NzM0ZGRmOGY1YTZjNWIxYmZmYThkNjVmYTFkN2RhYWU3OTRiOGNmRrvxyQ==: 00:24:46.393 15:01:46 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:46.393 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.393 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.394 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.394 15:01:46 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:46.394 15:01:46 -- nvmf/common.sh@717 -- # local ip 00:24:46.394 15:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.394 15:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.394 15:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:46.394 15:01:46 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:24:46.394 15:01:46 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.394 15:01:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:46.394 15:01:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.394 15:01:46 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:46.394 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.394 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.394 request: 00:24:46.394 { 00:24:46.394 "name": "nvme0", 00:24:46.394 "trtype": "rdma", 00:24:46.394 "traddr": "192.168.100.8", 00:24:46.394 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:46.394 "adrfam": "ipv4", 00:24:46.394 "trsvcid": "4420", 00:24:46.394 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:46.394 "method": "bdev_nvme_attach_controller", 00:24:46.394 "req_id": 1 00:24:46.394 } 00:24:46.394 Got JSON-RPC error response 00:24:46.394 response: 00:24:46.394 { 00:24:46.394 "code": -32602, 00:24:46.394 "message": "Invalid parameters" 00:24:46.394 } 00:24:46.394 15:01:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.394 15:01:46 -- common/autotest_common.sh@641 -- # es=1 00:24:46.394 15:01:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.394 15:01:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.394 15:01:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.394 15:01:46 -- host/auth.sh@121 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:46.394 15:01:46 -- host/auth.sh@121 -- # jq length 00:24:46.394 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.394 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.394 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.394 15:01:46 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:46.394 15:01:46 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:46.394 15:01:46 -- nvmf/common.sh@717 -- # local ip 00:24:46.394 15:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.394 15:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.394 15:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:24:46.394 15:01:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:24:46.394 15:01:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:24:46.394 15:01:46 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.394 15:01:46 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.394 15:01:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.394 15:01:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.394 15:01:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:24:46.394 15:01:46 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:46.394 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.394 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.652 request: 00:24:46.652 { 00:24:46.652 "name": "nvme0", 00:24:46.652 "trtype": "rdma", 00:24:46.652 "traddr": "192.168.100.8", 00:24:46.652 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:46.652 "adrfam": "ipv4", 00:24:46.652 "trsvcid": "4420", 00:24:46.652 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:46.652 "dhchap_key": "key2", 00:24:46.652 "method": "bdev_nvme_attach_controller", 00:24:46.652 "req_id": 1 00:24:46.652 } 00:24:46.652 Got JSON-RPC error response 00:24:46.652 response: 00:24:46.652 { 00:24:46.652 "code": -32602, 00:24:46.652 "message": "Invalid parameters" 00:24:46.652 } 00:24:46.652 15:01:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.652 15:01:46 -- common/autotest_common.sh@641 -- # es=1 00:24:46.652 15:01:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.652 15:01:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.652 15:01:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.652 15:01:46 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.652 15:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.652 15:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:46.652 15:01:46 -- host/auth.sh@127 -- # jq length 00:24:46.652 15:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.652 15:01:46 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:46.652 15:01:46 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:46.652 15:01:46 -- host/auth.sh@130 -- # cleanup 00:24:46.652 15:01:46 -- host/auth.sh@24 -- # nvmftestfini 00:24:46.652 15:01:46 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:24:46.652 15:01:46 -- nvmf/common.sh@117 -- # sync 00:24:46.652 15:01:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:46.652 15:01:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:46.652 15:01:46 -- nvmf/common.sh@120 -- # set +e 00:24:46.652 15:01:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.652 15:01:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:46.652 rmmod nvme_rdma 00:24:46.652 rmmod nvme_fabrics 00:24:46.652 15:01:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.652 15:01:46 -- nvmf/common.sh@124 -- # set -e 00:24:46.653 15:01:46 -- nvmf/common.sh@125 -- # return 0 00:24:46.653 15:01:46 -- nvmf/common.sh@478 -- # '[' -n 299302 ']' 00:24:46.653 15:01:46 -- nvmf/common.sh@479 -- # killprocess 299302 00:24:46.653 15:01:46 -- common/autotest_common.sh@936 -- # '[' -z 299302 ']' 00:24:46.653 15:01:46 -- common/autotest_common.sh@940 -- # kill -0 299302 00:24:46.653 15:01:46 -- common/autotest_common.sh@941 -- # uname 00:24:46.653 15:01:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:46.653 15:01:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 299302 00:24:46.653 15:01:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:46.653 15:01:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:46.653 15:01:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 299302' 00:24:46.653 killing process with pid 299302 00:24:46.653 15:01:46 -- common/autotest_common.sh@955 -- # kill 299302 00:24:46.653 15:01:46 -- common/autotest_common.sh@960 -- # wait 299302 00:24:48.024 15:01:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:48.024 15:01:47 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:24:48.024 15:01:47 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:48.024 15:01:47 -- host/auth.sh@26 -- # rmdir 
/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.024 15:01:47 -- host/auth.sh@27 -- # clean_kernel_target 00:24:48.024 15:01:47 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:48.024 15:01:47 -- nvmf/common.sh@675 -- # echo 0 00:24:48.024 15:01:47 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.024 15:01:47 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:48.024 15:01:47 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:48.024 15:01:47 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.024 15:01:47 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:48.024 15:01:47 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:24:48.024 15:01:47 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:48.962 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.962 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:48.963 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:48.963 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:50.861 
0000:81:00.0 (8086 0a54): nvme -> vfio-pci 00:24:51.119 15:01:50 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.F0c /tmp/spdk.key-null.9QP /tmp/spdk.key-sha256.tvl /tmp/spdk.key-sha384.93z /tmp/spdk.key-sha512.IqH /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:24:51.119 15:01:50 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:52.055 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:52.055 0000:81:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:52.055 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:52.055 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:52.055 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:52.055 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:52.055 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:52.055 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:52.055 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:52.055 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:52.055 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:52.055 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:52.055 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:52.055 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:52.055 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:52.055 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:52.055 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:52.055 00:24:52.055 real 0m59.995s 00:24:52.055 user 0m52.827s 00:24:52.055 sys 0m6.127s 00:24:52.055 15:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:52.055 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:24:52.055 ************************************ 00:24:52.055 END TEST nvmf_auth 00:24:52.055 
************************************ 00:24:52.314 15:01:52 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:24:52.314 15:01:52 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:24:52.314 15:01:52 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:24:52.314 15:01:52 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:24:52.314 15:01:52 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:52.314 15:01:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:52.314 15:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.314 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:24:52.314 ************************************ 00:24:52.314 START TEST nvmf_bdevperf 00:24:52.314 ************************************ 00:24:52.314 15:01:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:24:52.314 * Looking for test storage... 
00:24:52.314 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:52.314 15:01:52 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.314 15:01:52 -- nvmf/common.sh@7 -- # uname -s 00:24:52.314 15:01:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.314 15:01:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.314 15:01:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.314 15:01:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.314 15:01:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.314 15:01:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.314 15:01:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.314 15:01:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.314 15:01:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.314 15:01:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.314 15:01:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:24:52.314 15:01:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:24:52.314 15:01:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.314 15:01:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.314 15:01:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.314 15:01:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.314 15:01:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:52.314 15:01:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.314 15:01:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.314 15:01:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.314 15:01:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.314 15:01:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.314 15:01:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.314 15:01:52 -- paths/export.sh@5 -- # export PATH 00:24:52.314 15:01:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.314 15:01:52 -- nvmf/common.sh@47 -- # : 0 00:24:52.314 15:01:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.314 15:01:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.314 15:01:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.314 15:01:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.314 15:01:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.314 15:01:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.314 15:01:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.314 15:01:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.314 15:01:52 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.314 15:01:52 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.314 15:01:52 -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:52.314 15:01:52 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:24:52.314 15:01:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.314 15:01:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:52.314 15:01:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:52.314 15:01:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:52.314 15:01:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.314 15:01:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.314 15:01:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.314 15:01:52 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:52.314 15:01:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:52.314 15:01:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.314 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.843 15:01:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:54.843 15:01:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.843 15:01:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.843 15:01:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.843 15:01:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.843 15:01:54 -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.843 15:01:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@296 -- # e810=() 00:24:54.843 15:01:54 -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.843 15:01:54 -- nvmf/common.sh@297 -- # x722=() 00:24:54.843 15:01:54 -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.843 15:01:54 -- nvmf/common.sh@298 -- # mlx=() 00:24:54.843 15:01:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.843 15:01:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.843 15:01:54 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.843 15:01:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:24:54.843 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:24:54.843 15:01:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.843 15:01:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:24:54.843 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:24:54.843 15:01:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.843 15:01:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.843 15:01:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.843 15:01:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:24:54.843 Found net devices under 0000:09:00.0: mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.843 15:01:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.843 15:01:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:24:54.843 Found net devices under 0000:09:00.1: mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.843 15:01:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:54.843 15:01:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@409 -- # rdma_device_init 00:24:54.843 15:01:54 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:24:54.843 15:01:54 -- nvmf/common.sh@58 -- # uname 00:24:54.843 15:01:54 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:54.843 15:01:54 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:54.843 15:01:54 -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:54.843 15:01:54 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:54.843 15:01:54 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:54.843 15:01:54 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:54.843 
15:01:54 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:54.843 15:01:54 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:54.843 15:01:54 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:24:54.843 15:01:54 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:54.843 15:01:54 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:54.843 15:01:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:54.843 15:01:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.843 15:01:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@105 -- # continue 2 00:24:54.843 15:01:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@105 -- # continue 2 00:24:54.843 15:01:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:54.843 15:01:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@113 
-- # awk '{print $4}' 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.843 15:01:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:54.843 15:01:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:54.843 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.843 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:24:54.843 altname enp9s0f0np0 00:24:54.843 inet 192.168.100.8/24 scope global mlx_0_0 00:24:54.843 valid_lft forever preferred_lft forever 00:24:54.843 15:01:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:54.843 15:01:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.843 15:01:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:54.843 15:01:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:54.843 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.843 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:24:54.843 altname enp9s0f1np1 00:24:54.843 inet 192.168.100.9/24 scope global mlx_0_1 00:24:54.843 valid_lft forever preferred_lft forever 00:24:54.843 15:01:54 -- nvmf/common.sh@411 -- # return 0 00:24:54.843 15:01:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:54.843 15:01:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:54.843 15:01:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:24:54.843 15:01:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:54.843 15:01:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
00:24:54.843 15:01:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:54.843 15:01:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:54.843 15:01:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.843 15:01:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:54.843 15:01:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@105 -- # continue 2 00:24:54.843 15:01:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.843 15:01:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.843 15:01:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@105 -- # continue 2 00:24:54.843 15:01:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:54.843 15:01:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.843 15:01:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:54.843 15:01:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:54.843 15:01:54 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.843 15:01:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.843 15:01:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:24:54.843 192.168.100.9' 00:24:54.843 15:01:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:54.843 192.168.100.9' 00:24:54.843 15:01:54 -- nvmf/common.sh@446 -- # head -n 1 00:24:54.843 15:01:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:54.843 15:01:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:24:54.843 192.168.100.9' 00:24:54.844 15:01:54 -- nvmf/common.sh@447 -- # tail -n +2 00:24:54.844 15:01:54 -- nvmf/common.sh@447 -- # head -n 1 00:24:54.844 15:01:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:54.844 15:01:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:24:54.844 15:01:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:54.844 15:01:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:24:54.844 15:01:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:24:54.844 15:01:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:24:54.844 15:01:54 -- host/bdevperf.sh@25 -- # tgt_init 00:24:54.844 15:01:54 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:54.844 15:01:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:54.844 15:01:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:54.844 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:24:54.844 15:01:54 -- nvmf/common.sh@470 -- # nvmfpid=309857 00:24:54.844 15:01:54 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:54.844 15:01:54 -- nvmf/common.sh@471 -- # waitforlisten 309857 00:24:54.844 15:01:54 -- common/autotest_common.sh@817 -- # '[' -z 309857 ']' 00:24:54.844 15:01:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.844 15:01:54 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:24:54.844 15:01:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.844 15:01:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:54.844 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:24:54.844 [2024-04-26 15:01:54.566119] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:54.844 [2024-04-26 15:01:54.566271] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.844 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.844 [2024-04-26 15:01:54.699502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.101 [2024-04-26 15:01:54.958261] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.101 [2024-04-26 15:01:54.958336] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.101 [2024-04-26 15:01:54.958360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.101 [2024-04-26 15:01:54.958383] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.101 [2024-04-26 15:01:54.958401] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.101 [2024-04-26 15:01:54.961177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.101 [2024-04-26 15:01:54.961248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.101 [2024-04-26 15:01:54.961249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.666 15:01:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.666 15:01:55 -- common/autotest_common.sh@850 -- # return 0 00:24:55.666 15:01:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:55.667 15:01:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:55.667 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.667 15:01:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.667 15:01:55 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:55.667 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.667 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.667 [2024-04-26 15:01:55.521167] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7fe61aa67940) succeed. 00:24:55.667 [2024-04-26 15:01:55.532005] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7fe61aa21940) succeed. 
00:24:55.925 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.925 15:01:55 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.925 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.925 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.925 Malloc0 00:24:55.925 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.925 15:01:55 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.925 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.925 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.925 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.925 15:01:55 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.925 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.925 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.925 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.925 15:01:55 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:55.925 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.925 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:24:55.925 [2024-04-26 15:01:55.853162] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:55.925 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.925 15:01:55 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:55.925 15:01:55 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:55.925 15:01:55 -- nvmf/common.sh@521 -- # config=() 00:24:55.925 15:01:55 -- nvmf/common.sh@521 -- # local subsystem config 00:24:55.925 15:01:55 -- nvmf/common.sh@523 -- 
# for subsystem in "${@:-1}" 00:24:55.925 15:01:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:55.925 { 00:24:55.925 "params": { 00:24:55.925 "name": "Nvme$subsystem", 00:24:55.925 "trtype": "$TEST_TRANSPORT", 00:24:55.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.925 "adrfam": "ipv4", 00:24:55.925 "trsvcid": "$NVMF_PORT", 00:24:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.925 "hdgst": ${hdgst:-false}, 00:24:55.925 "ddgst": ${ddgst:-false} 00:24:55.925 }, 00:24:55.925 "method": "bdev_nvme_attach_controller" 00:24:55.925 } 00:24:55.925 EOF 00:24:55.925 )") 00:24:55.925 15:01:55 -- nvmf/common.sh@543 -- # cat 00:24:55.925 15:01:55 -- nvmf/common.sh@545 -- # jq . 00:24:55.925 15:01:55 -- nvmf/common.sh@546 -- # IFS=, 00:24:55.925 15:01:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:55.925 "params": { 00:24:55.925 "name": "Nvme1", 00:24:55.925 "trtype": "rdma", 00:24:55.925 "traddr": "192.168.100.8", 00:24:55.925 "adrfam": "ipv4", 00:24:55.925 "trsvcid": "4420", 00:24:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.925 "hdgst": false, 00:24:55.925 "ddgst": false 00:24:55.925 }, 00:24:55.925 "method": "bdev_nvme_attach_controller" 00:24:55.925 }' 00:24:55.925 [2024-04-26 15:01:55.928147] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:24:55.925 [2024-04-26 15:01:55.928282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310012 ] 00:24:55.925 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.182 [2024-04-26 15:01:56.050102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.445 [2024-04-26 15:01:56.282341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.703 Running I/O for 1 seconds... 00:24:58.076 00:24:58.076 Latency(us) 00:24:58.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.076 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:58.076 Verification LBA range: start 0x0 length 0x4000 00:24:58.076 Nvme1n1 : 1.01 10250.73 40.04 0.00 0.00 12407.21 5024.43 22524.97 00:24:58.076 =================================================================================================================== 00:24:58.076 Total : 10250.73 40.04 0.00 0.00 12407.21 5024.43 22524.97 00:24:59.009 15:01:58 -- host/bdevperf.sh@30 -- # bdevperfpid=310284 00:24:59.009 15:01:58 -- host/bdevperf.sh@32 -- # sleep 3 00:24:59.009 15:01:58 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:59.009 15:01:58 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:59.009 15:01:58 -- nvmf/common.sh@521 -- # config=() 00:24:59.009 15:01:58 -- nvmf/common.sh@521 -- # local subsystem config 00:24:59.009 15:01:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:59.009 15:01:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:59.009 { 00:24:59.009 "params": { 00:24:59.009 "name": "Nvme$subsystem", 00:24:59.009 "trtype": "$TEST_TRANSPORT", 00:24:59.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.009 "adrfam": "ipv4", 
00:24:59.009 "trsvcid": "$NVMF_PORT", 00:24:59.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.009 "hdgst": ${hdgst:-false}, 00:24:59.009 "ddgst": ${ddgst:-false} 00:24:59.009 }, 00:24:59.009 "method": "bdev_nvme_attach_controller" 00:24:59.009 } 00:24:59.009 EOF 00:24:59.009 )") 00:24:59.009 15:01:58 -- nvmf/common.sh@543 -- # cat 00:24:59.009 15:01:58 -- nvmf/common.sh@545 -- # jq . 00:24:59.009 15:01:58 -- nvmf/common.sh@546 -- # IFS=, 00:24:59.009 15:01:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:59.009 "params": { 00:24:59.009 "name": "Nvme1", 00:24:59.009 "trtype": "rdma", 00:24:59.009 "traddr": "192.168.100.8", 00:24:59.009 "adrfam": "ipv4", 00:24:59.009 "trsvcid": "4420", 00:24:59.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.009 "hdgst": false, 00:24:59.009 "ddgst": false 00:24:59.009 }, 00:24:59.009 "method": "bdev_nvme_attach_controller" 00:24:59.009 }' 00:24:59.009 [2024-04-26 15:01:58.814900] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:59.009 [2024-04-26 15:01:58.815054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310284 ] 00:24:59.009 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.009 [2024-04-26 15:01:58.946759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.267 [2024-04-26 15:01:59.175031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.524 Running I/O for 15 seconds... 
00:25:02.048 15:02:01 -- host/bdevperf.sh@33 -- # kill -9 309857 00:25:02.048 15:02:01 -- host/bdevperf.sh@35 -- # sleep 3 00:25:02.982 [2024-04-26 15:02:02.780562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.982 [2024-04-26 15:02:02.780645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.982 [2024-04-26 15:02:02.780705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.982 [2024-04-26 15:02:02.780732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.982 [2024-04-26 15:02:02.780760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.982 [2024-04-26 15:02:02.780786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.982 [2024-04-26 15:02:02.780813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.982 [2024-04-26 15:02:02.780838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.982 [2024-04-26 15:02:02.780864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.982 [2024-04-26 15:02:02.780888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.982 [2024-04-26 15:02:02.780915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.780938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.780965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.780989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.982 [2024-04-26 15:02:02.781738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.982 [2024-04-26 15:02:02.781761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.781787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.781810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.781836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.781860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.781886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.781909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.781934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.781958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.781984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.782994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.983 [2024-04-26 15:02:02.783775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.983 [2024-04-26 15:02:02.783798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.783824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:02.984 [2024-04-26 15:02:02.783847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.783875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fd000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.783900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.783927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fb000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.783952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.783978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f9000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f7000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f5000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f3000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f1000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ef000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ed000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075eb000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e9000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e7000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e5000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e3000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e1000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075df000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dd000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075db000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d9000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d7000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d5000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.784964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d3000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.784988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d1000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cf000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cd000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cb000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c9000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c7000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c5000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c3000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c1000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bf000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bd000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bb000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b9000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.984 [2024-04-26 15:02:02.785681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b7000 len:0x1000 key:0x18bd00
00:25:02.984 [2024-04-26 15:02:02.785705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b5000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.785755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b3000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.785805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b1000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.785855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075af000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.785906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ad000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.785955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.785982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ab000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a9000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a7000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a5000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a3000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a1000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759f000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759d000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759b000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007599000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007597000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007595000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007593000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007591000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758f000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758d000 len:0x1000 key:0x18bd00
00:25:02.985 [2024-04-26 15:02:02.786818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.985 [2024-04-26 15:02:02.786846]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758b000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.786870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.786897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007589000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.786921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.786948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007587000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.786972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.786998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007585000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.787022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.787049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007583000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.787074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.787100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44528 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007581000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.787123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.787161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757f000 len:0x1000 key:0x18bd00 00:25:02.985 [2024-04-26 15:02:02.787185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.789089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:02.985 [2024-04-26 15:02:02.789122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:02.985 [2024-04-26 15:02:02.789156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44544 len:8 PRP1 0x0 PRP2 0x0 00:25:02.985 [2024-04-26 15:02:02.789186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.985 [2024-04-26 15:02:02.789397] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20000b1ff180 was disconnected and freed. reset controller. 
00:25:02.985 [2024-04-26 15:02:02.793781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.985 [2024-04-26 15:02:02.834338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:02.985 [2024-04-26 15:02:02.838014] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:02.985 [2024-04-26 15:02:02.838057] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:02.985 [2024-04-26 15:02:02.838081] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:25:03.920 [2024-04-26 15:02:03.842466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:03.920 [2024-04-26 15:02:03.842531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.920 [2024-04-26 15:02:03.842565] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:03.920 [2024-04-26 15:02:03.842847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.920 [2024-04-26 15:02:03.842880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.920 [2024-04-26 15:02:03.842907] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:03.920 [2024-04-26 15:02:03.846950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.920 [2024-04-26 15:02:03.855257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.920 [2024-04-26 15:02:03.858955] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:03.920 [2024-04-26 15:02:03.858997] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:03.920 [2024-04-26 15:02:03.859019] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:25:04.852 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 309857 Killed "${NVMF_APP[@]}" "$@" 00:25:04.852 15:02:04 -- host/bdevperf.sh@36 -- # tgt_init 00:25:04.852 15:02:04 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:04.852 15:02:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:04.852 15:02:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:04.852 15:02:04 -- common/autotest_common.sh@10 -- # set +x 00:25:04.852 15:02:04 -- nvmf/common.sh@470 -- # nvmfpid=311074 00:25:04.852 15:02:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:04.852 15:02:04 -- nvmf/common.sh@471 -- # waitforlisten 311074 00:25:04.852 15:02:04 -- common/autotest_common.sh@817 -- # '[' -z 311074 ']' 00:25:04.852 15:02:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.852 15:02:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.852 15:02:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:04.852 15:02:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.852 15:02:04 -- common/autotest_common.sh@10 -- # set +x 00:25:04.852 [2024-04-26 15:02:04.834585] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:04.852 [2024-04-26 15:02:04.834751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.852 [2024-04-26 15:02:04.863232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:04.852 [2024-04-26 15:02:04.863283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.852 [2024-04-26 15:02:04.863570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.852 [2024-04-26 15:02:04.863602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.852 [2024-04-26 15:02:04.863625] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:04.852 [2024-04-26 15:02:04.863670] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:04.852 [2024-04-26 15:02:04.867853] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:04.852 [2024-04-26 15:02:04.878226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.852 [2024-04-26 15:02:04.881859] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:04.852 [2024-04-26 15:02:04.881902] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:04.853 [2024-04-26 15:02:04.881924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:25:04.853 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.110 [2024-04-26 15:02:04.989794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:05.368 [2024-04-26 15:02:05.244389] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.368 [2024-04-26 15:02:05.244466] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.368 [2024-04-26 15:02:05.244490] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.368 [2024-04-26 15:02:05.244513] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.368 [2024-04-26 15:02:05.244531] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.368 [2024-04-26 15:02:05.244654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.368 [2024-04-26 15:02:05.244701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.368 [2024-04-26 15:02:05.244706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.934 15:02:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:05.934 15:02:05 -- common/autotest_common.sh@850 -- # return 0 00:25:05.934 15:02:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:05.934 15:02:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:05.934 15:02:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.934 15:02:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.934 15:02:05 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:05.934 15:02:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.934 15:02:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.934 [2024-04-26 15:02:05.784113] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000027f40/0x7f8b6a283940) succeed. 00:25:05.934 [2024-04-26 15:02:05.796358] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000280c0/0x7f8b6a23d940) succeed. 00:25:05.934 [2024-04-26 15:02:05.886001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:05.934 [2024-04-26 15:02:05.886083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:05.934 [2024-04-26 15:02:05.886389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.934 [2024-04-26 15:02:05.886419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.934 [2024-04-26 15:02:05.886447] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:05.934 [2024-04-26 15:02:05.890251] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.934 [2024-04-26 15:02:05.896531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.934 [2024-04-26 15:02:05.899927] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:05.934 [2024-04-26 15:02:05.899963] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:05.934 [2024-04-26 15:02:05.899999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000137ff840 00:25:06.193 15:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.193 15:02:06 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.193 15:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.193 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 Malloc0 00:25:06.193 15:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.193 15:02:06 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.193 15:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.193 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 15:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.193 15:02:06 -- host/bdevperf.sh@20 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.193 15:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.193 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 15:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.193 15:02:06 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:06.193 15:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.193 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.193 [2024-04-26 15:02:06.130427] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:06.193 15:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.193 15:02:06 -- host/bdevperf.sh@38 -- # wait 310284 00:25:07.125 [2024-04-26 15:02:06.904292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:07.125 [2024-04-26 15:02:06.904363] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.125 [2024-04-26 15:02:06.904609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.126 [2024-04-26 15:02:06.904637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.126 [2024-04-26 15:02:06.904659] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:07.126 [2024-04-26 15:02:06.908247] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:07.126 [2024-04-26 15:02:06.916549] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.126 [2024-04-26 15:02:06.981478] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:15.239 00:25:15.239 Latency(us) 00:25:15.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.239 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:15.239 Verification LBA range: start 0x0 length 0x4000 00:25:15.239 Nvme1n1 : 15.01 6671.23 26.06 9140.97 0.00 8063.04 879.88 1062557.01 00:25:15.239 =================================================================================================================== 00:25:15.239 Total : 6671.23 26.06 9140.97 0.00 8063.04 879.88 1062557.01 00:25:15.808 15:02:15 -- host/bdevperf.sh@39 -- # sync 00:25:15.808 15:02:15 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.808 15:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.808 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:25:15.808 15:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.808 15:02:15 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:15.808 15:02:15 -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:15.808 15:02:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:15.808 15:02:15 -- nvmf/common.sh@117 -- # sync 00:25:15.808 15:02:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:15.808 15:02:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:15.808 15:02:15 -- nvmf/common.sh@120 -- # set +e 00:25:15.808 15:02:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.808 15:02:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:15.808 rmmod nvme_rdma 00:25:15.808 rmmod nvme_fabrics 00:25:15.808 15:02:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.808 15:02:15 -- nvmf/common.sh@124 -- # set -e 
00:25:15.808 15:02:15 -- nvmf/common.sh@125 -- # return 0 00:25:15.808 15:02:15 -- nvmf/common.sh@478 -- # '[' -n 311074 ']' 00:25:15.808 15:02:15 -- nvmf/common.sh@479 -- # killprocess 311074 00:25:15.808 15:02:15 -- common/autotest_common.sh@936 -- # '[' -z 311074 ']' 00:25:15.808 15:02:15 -- common/autotest_common.sh@940 -- # kill -0 311074 00:25:15.808 15:02:15 -- common/autotest_common.sh@941 -- # uname 00:25:15.808 15:02:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:15.808 15:02:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 311074 00:25:15.808 15:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:15.808 15:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:15.808 15:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 311074' 00:25:15.808 killing process with pid 311074 00:25:15.808 15:02:15 -- common/autotest_common.sh@955 -- # kill 311074 00:25:15.808 15:02:15 -- common/autotest_common.sh@960 -- # wait 311074 00:25:16.374 [2024-04-26 15:02:16.219910] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:17.752 15:02:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:17.752 15:02:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:25:17.752 00:25:17.752 real 0m25.346s 00:25:17.752 user 1m16.147s 00:25:17.752 sys 0m3.465s 00:25:17.753 15:02:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:17.753 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:25:17.753 ************************************ 00:25:17.753 END TEST nvmf_bdevperf 00:25:17.753 ************************************ 00:25:17.753 15:02:17 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:17.753 15:02:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:17.753 15:02:17 -- common/autotest_common.sh@1093 -- 
# xtrace_disable 00:25:17.753 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:25:17.753 ************************************ 00:25:17.753 START TEST nvmf_target_disconnect 00:25:17.753 ************************************ 00:25:17.753 15:02:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:17.753 * Looking for test storage... 00:25:17.753 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:17.753 15:02:17 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.753 15:02:17 -- nvmf/common.sh@7 -- # uname -s 00:25:17.753 15:02:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.753 15:02:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.753 15:02:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.753 15:02:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.753 15:02:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.753 15:02:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.753 15:02:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.753 15:02:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.753 15:02:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.753 15:02:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.753 15:02:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:17.753 15:02:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:17.753 15:02:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.753 15:02:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.753 15:02:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.753 15:02:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:17.753 15:02:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:17.753 15:02:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.753 15:02:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.753 15:02:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.753 15:02:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.753 15:02:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.753 15:02:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.753 15:02:17 -- paths/export.sh@5 -- # export PATH 00:25:17.753 15:02:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.753 15:02:17 -- nvmf/common.sh@47 -- # : 0 00:25:17.753 15:02:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.753 15:02:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.753 15:02:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.753 15:02:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.753 15:02:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.753 15:02:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.753 15:02:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.753 15:02:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.753 15:02:17 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:25:17.753 15:02:17 -- host/target_disconnect.sh@13 -- # 
MALLOC_BDEV_SIZE=64 00:25:17.753 15:02:17 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:17.753 15:02:17 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:25:17.753 15:02:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:25:17.753 15:02:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.753 15:02:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:17.753 15:02:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:17.753 15:02:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:17.753 15:02:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.753 15:02:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.753 15:02:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.753 15:02:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:17.753 15:02:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:17.753 15:02:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.753 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.663 15:02:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:19.663 15:02:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.663 15:02:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.663 15:02:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.663 15:02:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.663 15:02:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.663 15:02:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.663 15:02:19 -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.663 15:02:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.663 15:02:19 -- nvmf/common.sh@296 -- # e810=() 00:25:19.663 15:02:19 -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.663 15:02:19 -- nvmf/common.sh@297 -- # x722=() 00:25:19.663 15:02:19 -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.663 15:02:19 -- nvmf/common.sh@298 -- # mlx=() 00:25:19.663 15:02:19 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:25:19.663 15:02:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.663 15:02:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.663 15:02:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:19.663 15:02:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:19.663 15:02:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:19.663 15:02:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:19.663 15:02:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:19.663 15:02:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.663 15:02:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.663 15:02:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:25:19.663 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:25:19.663 15:02:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:19.663 15:02:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:19.663 15:02:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 
00:25:19.663 15:02:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:25:19.663 15:02:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)'
00:25:19.663 Found 0000:09:00.1 (0x15b3 - 0x1017)
00:25:19.663 15:02:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15'
00:25:19.663 15:02:19 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:19.663 15:02:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:19.663 15:02:19 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:25:19.663 15:02:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:19.663 15:02:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0'
00:25:19.663 Found net devices under 0000:09:00.0: mlx_0_0
00:25:19.663 15:02:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:25:19.663 15:02:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:19.663 15:02:19 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:25:19.663 15:02:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:19.663 15:02:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1'
00:25:19.663 Found net devices under 0000:09:00.1: mlx_0_1
00:25:19.663 15:02:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:25:19.663 15:02:19 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:25:19.663 15:02:19 -- nvmf/common.sh@403 -- # is_hw=yes
00:25:19.663 15:02:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@409 -- # rdma_device_init
00:25:19.663 15:02:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules
00:25:19.663 15:02:19 -- nvmf/common.sh@58 -- # uname
00:25:19.663 15:02:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:25:19.663 15:02:19 -- nvmf/common.sh@62 -- # modprobe ib_cm
00:25:19.663 15:02:19 -- nvmf/common.sh@63 -- # modprobe ib_core
00:25:19.663 15:02:19 -- nvmf/common.sh@64 -- # modprobe ib_umad
00:25:19.663 15:02:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:25:19.663 15:02:19 -- nvmf/common.sh@66 -- # modprobe iw_cm
00:25:19.663 15:02:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:25:19.663 15:02:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:25:19.663 15:02:19 -- nvmf/common.sh@491 -- # allocate_nic_ips
00:25:19.663 15:02:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:25:19.663 15:02:19 -- nvmf/common.sh@73 -- # get_rdma_if_list
00:25:19.663 15:02:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:19.663 15:02:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:25:19.663 15:02:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:25:19.663 15:02:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:25:19.663 15:02:19 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:25:19.663 15:02:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:25:19.663 15:02:19 -- nvmf/common.sh@105 -- # continue 2
00:25:19.663 15:02:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.663 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@104 -- # echo mlx_0_1
00:25:19.663 15:02:19 -- nvmf/common.sh@105 -- # continue 2
00:25:19.663 15:02:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:25:19.663 15:02:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:25:19.663 15:02:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:19.663 15:02:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:25:19.663 15:02:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:25:19.663 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:19.663 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff
00:25:19.663 altname enp9s0f0np0
00:25:19.663 inet 192.168.100.8/24 scope global mlx_0_0
00:25:19.663 valid_lft forever preferred_lft forever
00:25:19.663 15:02:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:25:19.663 15:02:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:25:19.663 15:02:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:19.663 15:02:19 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:19.663 15:02:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:25:19.663 15:02:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:25:19.663 15:02:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:25:19.663 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:25:19.663 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff
00:25:19.663 altname enp9s0f1np1
00:25:19.664 inet 192.168.100.9/24 scope global mlx_0_1
00:25:19.664 valid_lft forever preferred_lft forever
00:25:19.664 15:02:19 -- nvmf/common.sh@411 -- # return 0
00:25:19.664 15:02:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:25:19.664 15:02:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:25:19.664 15:02:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]]
00:25:19.664 15:02:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips
00:25:19.664 15:02:19 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:25:19.664 15:02:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:25:19.664 15:02:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:25:19.664 15:02:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:25:19.664 15:02:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:25:19.664 15:02:19 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:25:19.664 15:02:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:19.664 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.664 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:25:19.664 15:02:19 -- nvmf/common.sh@104 -- # echo mlx_0_0
00:25:19.664 15:02:19 -- nvmf/common.sh@105 -- # continue 2
00:25:19.664 15:02:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:25:19.664 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.664 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:25:19.664 15:02:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:25:19.664 15:02:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:25:19.664 15:02:19 -- nvmf/common.sh@104 -- # echo mlx_0_1
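`get_ip_address` in the trace pipes `ip -o -4 addr show <interface>` through `awk '{print $4}' | cut -d/ -f1` to pull the address out of the CIDR field. The same pipeline in isolation, run on a canned sample line (the sample is a hypothetical `ip -o` output line; only the 192.168.100.8 address is taken from this run):

```shell
# One-line output in `ip -o -4 addr show` format (sample values); field 4 is
# the CIDR address, and cut strips the /24 prefix length.
sample='14: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # 192.168.100.8
```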
00:25:19.664 15:02:19 -- nvmf/common.sh@105 -- # continue 2
00:25:19.664 15:02:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:25:19.664 15:02:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0
00:25:19.664 15:02:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:19.664 15:02:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list)
00:25:19.664 15:02:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:25:19.664 15:02:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:25:19.664 15:02:19 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:25:19.664 15:02:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8
00:25:19.664 192.168.100.9'
00:25:19.664 15:02:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8
00:25:19.664 192.168.100.9'
00:25:19.664 15:02:19 -- nvmf/common.sh@446 -- # head -n 1
00:25:19.664 15:02:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:25:19.664 15:02:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8
00:25:19.664 192.168.100.9'
00:25:19.664 15:02:19 -- nvmf/common.sh@447 -- # tail -n +2
00:25:19.664 15:02:19 -- nvmf/common.sh@447 -- # head -n 1
00:25:19.922 15:02:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:25:19.922 15:02:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']'
00:25:19.922 15:02:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:25:19.922 15:02:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']'
00:25:19.922 15:02:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']'
00:25:19.922 15:02:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma
00:25:19.922 15:02:19 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:25:19.922 15:02:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:19.922 15:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:19.922 15:02:19 -- common/autotest_common.sh@10 -- # set +x
00:25:19.922 ************************************
00:25:19.922 START TEST nvmf_target_disconnect_tc1
00:25:19.922 ************************************
00:25:19.922 15:02:19 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1
00:25:19.922 15:02:19 -- host/target_disconnect.sh@32 -- # set +e
00:25:19.922 15:02:19 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:25:19.922 EAL: No free 2048 kB hugepages reported on node 1
00:25:20.181 [2024-04-26 15:02:20.058172] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:20.181 [2024-04-26 15:02:20.058263] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:20.181 [2024-04-26 15:02:20.058289] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6f00
00:25:21.114 [2024-04-26 15:02:21.062709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:21.114 [2024-04-26 15:02:21.062796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
00:25:21.114 [2024-04-26 15:02:21.062828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state
00:25:21.114 [2024-04-26 15:02:21.062938] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:21.114 [2024-04-26 15:02:21.062973] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:25:21.114 spdk_nvme_probe() failed for transport address '192.168.100.8'
00:25:21.114 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:25:21.114 Initializing NVMe Controllers
00:25:21.114 15:02:21 -- host/target_disconnect.sh@33 -- # trap - ERR
00:25:21.114 15:02:21 -- host/target_disconnect.sh@33 -- # print_backtrace
00:25:21.114 15:02:21 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:25:21.114 15:02:21 -- common/autotest_common.sh@1139 -- # return 0
00:25:21.114 15:02:21 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:25:21.114 15:02:21 -- host/target_disconnect.sh@41 -- # set -e
00:25:21.114
00:25:21.114 real 0m1.301s
00:25:21.114 user 0m0.957s
00:25:21.114 sys 0m0.328s
00:25:21.114 15:02:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:25:21.114 15:02:21 -- common/autotest_common.sh@10 -- # set +x
00:25:21.115 ************************************
00:25:21.115 END TEST nvmf_target_disconnect_tc1
00:25:21.115 ************************************
00:25:21.115 15:02:21 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:25:21.115 15:02:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:21.115 15:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:21.115 15:02:21 -- common/autotest_common.sh@10 -- # set +x
00:25:21.371 ************************************
00:25:21.371 START TEST nvmf_target_disconnect_tc2
00:25:21.371 ************************************
00:25:21.371 15:02:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
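Earlier in the trace, `nvmf/common.sh` derives `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` from the two-line `RDMA_IP_LIST` using `head -n 1` and `tail -n +2 | head -n 1`. The same selection in isolation, with the addresses copied from this run:

```shell
# RDMA_IP_LIST holds one discovered RDMA IP per line, as in the trace.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# First line becomes the first target IP; tail -n +2 drops line 1, so the
# following head -n 1 picks the second line.
first=$(printf '%s\n' "$RDMA_IP_LIST" | head -n 1)
second=$(printf '%s\n' "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$first $second"   # 192.168.100.8 192.168.100.9
```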
00:25:21.371 15:02:21 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8
00:25:21.371 15:02:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:21.371 15:02:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:21.371 15:02:21 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:21.371 15:02:21 -- common/autotest_common.sh@10 -- # set +x
00:25:21.371 15:02:21 -- nvmf/common.sh@470 -- # nvmfpid=314382
00:25:21.371 15:02:21 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:21.371 15:02:21 -- nvmf/common.sh@471 -- # waitforlisten 314382
00:25:21.371 15:02:21 -- common/autotest_common.sh@817 -- # '[' -z 314382 ']'
00:25:21.371 15:02:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:21.371 15:02:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:21.371 15:02:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:21.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:21.371 15:02:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:21.371 15:02:21 -- common/autotest_common.sh@10 -- # set +x
00:25:21.630 [2024-04-26 15:02:21.379442] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:25:21.630 [2024-04-26 15:02:21.379603] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:21.630 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.630 [2024-04-26 15:02:21.516571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:21.890 [2024-04-26 15:02:21.739175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:21.890 [2024-04-26 15:02:21.739247] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:21.890 [2024-04-26 15:02:21.739272] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:21.890 [2024-04-26 15:02:21.739291] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:21.890 [2024-04-26 15:02:21.739307] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:21.890 [2024-04-26 15:02:21.739476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:25:21.890 [2024-04-26 15:02:21.739583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:25:21.890 [2024-04-26 15:02:21.739625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:25:21.890 [2024-04-26 15:02:21.739631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:25:22.457 15:02:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:22.457 15:02:22 -- common/autotest_common.sh@850 -- # return 0
00:25:22.457 15:02:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:25:22.457 15:02:22 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:22.457 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.457 15:02:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:22.457 15:02:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:22.457 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.457 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.457 Malloc0
00:25:22.457 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.457 15:02:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:25:22.457 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.457 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.457 [2024-04-26 15:02:22.404619] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f3c7cd05940) succeed.
00:25:22.457 [2024-04-26 15:02:22.416070] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f3c7cbbd940) succeed.
00:25:22.714 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.714 15:02:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:22.714 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.714 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.714 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.714 15:02:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:22.714 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.714 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.714 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.714 15:02:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:25:22.714 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.714 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.714 [2024-04-26 15:02:22.743450] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:25:22.714 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.714 15:02:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:25:22.714 15:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.714 15:02:22 -- common/autotest_common.sh@10 -- # set +x
00:25:22.714 15:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:22.714 15:02:22 -- host/target_disconnect.sh@50 -- # reconnectpid=314657
00:25:22.714 15:02:22 -- host/target_disconnect.sh@52 -- # sleep 2
00:25:22.714 15:02:22 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:25:22.972 EAL: No free 2048 kB hugepages reported on node 1
00:25:24.876 15:02:24 -- host/target_disconnect.sh@53 -- # kill -9 314382
00:25:24.876 15:02:24 -- host/target_disconnect.sh@55 -- # sleep 2
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Read completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 Write completed with error (sct=0, sc=8)
00:25:25.998 starting I/O failed
00:25:25.998 [2024-04-26 15:02:26.029015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:26.999 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 314382 Killed "${NVMF_APP[@]}" "$@"
00:25:26.999 15:02:26 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8
00:25:26.999 15:02:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:26.999 15:02:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:26.999 15:02:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:26.999 15:02:26 -- common/autotest_common.sh@10 -- # set +x
00:25:26.999 15:02:26 -- nvmf/common.sh@470 -- # nvmfpid=315078
00:25:26.999 15:02:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:26.999 15:02:26 -- nvmf/common.sh@471 -- # waitforlisten 315078
00:25:26.999 15:02:26 -- common/autotest_common.sh@817 -- # '[' -z 315078 ']'
00:25:26.999 15:02:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:26.999 15:02:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:26.999 15:02:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:26.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:26.999 15:02:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:26.999 15:02:26 -- common/autotest_common.sh@10 -- # set +x
00:25:26.999 [2024-04-26 15:02:26.845427] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:25:26.999 [2024-04-26 15:02:26.845568] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:26.999 EAL: No free 2048 kB hugepages reported on node 1
00:25:26.999 [2024-04-26 15:02:26.977409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Read completed with error (sct=0, sc=8)
00:25:26.999 starting I/O failed
00:25:26.999 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Read completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Read completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Read completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Read completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Read completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 Write completed with error (sct=0, sc=8)
00:25:27.000 starting I/O failed
00:25:27.000 [2024-04-26 15:02:27.034464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:27.259 [2024-04-26 15:02:27.203757] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:27.259 [2024-04-26 15:02:27.203823] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:27.259 [2024-04-26 15:02:27.203847] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:27.259 [2024-04-26 15:02:27.203874] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:27.259 [2024-04-26 15:02:27.203891] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:27.259 [2024-04-26 15:02:27.204040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:25:27.259 [2024-04-26 15:02:27.204162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:25:27.259 [2024-04-26 15:02:27.204267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:25:27.259 [2024-04-26 15:02:27.204271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:25:27.825 15:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:27.825 15:02:27 -- common/autotest_common.sh@850 -- # return 0
00:25:27.825 15:02:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:25:27.825 15:02:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:27.825 15:02:27 -- common/autotest_common.sh@10 -- # set +x
00:25:27.825 15:02:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:27.825 15:02:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:27.825 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:27.825 15:02:27 -- common/autotest_common.sh@10 -- # set +x
00:25:27.825 Malloc0
00:25:27.825 15:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:27.825 15:02:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:25:27.825 15:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:27.825 15:02:27 -- common/autotest_common.sh@10 -- # set +x
00:25:28.082 [2024-04-26 15:02:27.919925] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f6272b4a940) succeed.
00:25:28.082 [2024-04-26 15:02:27.932347] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f6272b06940) succeed.
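`nvmf_tgt` is started with `-m 0xF0` in this trace, and the reactors above come up on cores 4 through 7: the mask is just a bitmap of CPU cores. A small sketch decoding such a mask; `mask_to_cores` is a hypothetical helper written for illustration, not part of the SPDK scripts:

```shell
# Decode an SPDK-style core mask: bit i set means a reactor runs on core i.
mask_to_cores() {
    local i cores="" mask=$(( $1 ))
    for (( i = 0; i < 64; i++ )); do
        if (( (mask >> i) & 1 )); then
            cores+=" $i"
        fi
    done
    echo "${cores# }"
}

# -m 0xF0 selects cores 4-7, matching the "Reactor started on core 4..7"
# notices in the log above.
mask_to_cores 0xF0   # prints: 4 5 6 7
```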
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Write completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Write completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Write completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Read completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Write completed with error (sct=0, sc=8)
00:25:28.082 starting I/O failed
00:25:28.082 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Write completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8)
00:25:28.083 starting I/O failed
00:25:28.083 Read completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Read completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Read completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Read completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Read completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Write completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Write completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Write completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 Write completed with error (sct=0, sc=8) 00:25:28.083 starting I/O failed 00:25:28.083 [2024-04-26 15:02:28.040053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:28.342 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.342 15:02:28 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:28.342 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.342 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:28.342 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.342 15:02:28 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.342 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.342 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:28.342 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.342 15:02:28 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:28.342 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.342 15:02:28 -- common/autotest_common.sh@10 
-- # set +x 00:25:28.342 [2024-04-26 15:02:28.260614] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:28.342 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.342 15:02:28 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:28.342 15:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.342 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:28.342 15:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.342 15:02:28 -- host/target_disconnect.sh@58 -- # wait 314657 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, 
sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Read completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 Write completed with error (sct=0, sc=8) 00:25:29.278 starting I/O failed 00:25:29.278 [2024-04-26 15:02:29.045716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:29.278 [2024-04-26 15:02:29.045764] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:29.278 A controller has encountered a failure and is being reset. 
00:25:29.278 [2024-04-26 15:02:29.045868] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:25:29.278 [2024-04-26 15:02:29.077083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:29.278 Controller properly reset.
00:25:33.460 Initializing NVMe Controllers
00:25:33.460 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:33.460 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:25:33.460 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:33.460 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:33.460 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:33.460 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:33.460 Initialization complete. Launching workers.
00:25:33.460 Starting thread on core 1
00:25:33.460 Starting thread on core 2
00:25:33.460 Starting thread on core 3
00:25:33.460 Starting thread on core 0
00:25:33.460 15:02:33 -- host/target_disconnect.sh@59 -- # sync
00:25:33.460
00:25:33.460 real 0m12.018s
00:25:33.460 user 0m39.874s
00:25:33.460 sys 0m1.814s
00:25:33.460 15:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:25:33.460 15:02:33 -- common/autotest_common.sh@10 -- # set +x
00:25:33.460 ************************************
00:25:33.460 END TEST nvmf_target_disconnect_tc2
00:25:33.460 ************************************
00:25:33.460 15:02:33 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']'
00:25:33.460 15:02:33 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:25:33.460 15:02:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:33.460 15:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:33.460 15:02:33 -- common/autotest_common.sh@10 -- # set +x
00:25:33.460 ************************************
00:25:33.460 START TEST nvmf_target_disconnect_tc3
00:25:33.460 ************************************
00:25:33.460 15:02:33 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3
00:25:33.460 15:02:33 -- host/target_disconnect.sh@65 -- # reconnectpid=315902
00:25:33.460 15:02:33 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:25:33.461 15:02:33 -- host/target_disconnect.sh@67 -- # sleep 2
00:25:33.461 EAL: No free 2048 kB hugepages reported on node 1
00:25:35.363 15:02:35 -- host/target_disconnect.sh@68 -- # kill -9 315078
00:25:35.363 15:02:35 -- host/target_disconnect.sh@70 -- # sleep 2
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Write completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 Read completed with error (sct=0, sc=8)
00:25:36.740 starting I/O failed
00:25:36.740 [2024-04-26 15:02:36.691658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:37.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 315078 Killed "${NVMF_APP[@]}" "$@"
00:25:37.671 15:02:37 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9
00:25:37.671 15:02:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:37.671 15:02:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:37.671 15:02:37 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:37.671 15:02:37 -- common/autotest_common.sh@10 -- # set +x
00:25:37.671 15:02:37 -- nvmf/common.sh@470 -- # nvmfpid=316426
00:25:37.671 15:02:37 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:37.671 15:02:37 -- nvmf/common.sh@471 -- # waitforlisten 316426
00:25:37.671 15:02:37 -- common/autotest_common.sh@817 -- # '[' -z 316426 ']'
00:25:37.671 15:02:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:37.671 15:02:37 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:37.671 15:02:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:37.671 15:02:37 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:37.671 15:02:37 -- common/autotest_common.sh@10 -- # set +x
00:25:37.671 [2024-04-26 15:02:37.505377] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:25:37.671 [2024-04-26 15:02:37.505551] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:37.671 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.671 [2024-04-26 15:02:37.640972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Read completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 Write completed with error (sct=0, sc=8)
00:25:37.671 starting I/O failed
00:25:37.671 [2024-04-26 15:02:37.697295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:37.928 [2024-04-26 15:02:37.865574] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:37.928 [2024-04-26 15:02:37.865643] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:37.928 [2024-04-26 15:02:37.865668] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:37.928 [2024-04-26 15:02:37.865687] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:37.928 [2024-04-26 15:02:37.865703] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:37.928 [2024-04-26 15:02:37.865850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:25:37.928 [2024-04-26 15:02:37.865900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:25:37.928 [2024-04-26 15:02:37.865947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:25:37.928 [2024-04-26 15:02:37.865954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:25:38.494 15:02:38 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:38.494 15:02:38 -- common/autotest_common.sh@850 -- # return 0
00:25:38.495 15:02:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:25:38.495 15:02:38 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:38.495 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:38.495 15:02:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:38.495 15:02:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:38.495 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:38.495 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:38.495 Malloc0
00:25:38.495 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:38.495 15:02:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:25:38.495 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:38.495 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:38.753 [2024-04-26 15:02:38.597464] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f9666d5a940) succeed.
00:25:38.753 [2024-04-26 15:02:38.610272] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f9666d16940) succeed.
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Write completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 Read completed with error (sct=0, sc=8)
00:25:38.753 starting I/O failed
00:25:38.753 [2024-04-26 15:02:38.702728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:39.011 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:39.011 15:02:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:39.011 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:39.011 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:39.011 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:39.011 15:02:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:39.011 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:39.011 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:39.011 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:39.011 15:02:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:25:39.011 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:39.011 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:39.011 [2024-04-26 15:02:38.945928] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:25:39.011 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:39.011 15:02:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:25:39.011 15:02:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:39.011 15:02:38 -- common/autotest_common.sh@10 -- # set +x
00:25:39.011 15:02:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:39.011 15:02:38 -- host/target_disconnect.sh@73 -- # wait 315902
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Write completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 Read completed with error (sct=0, sc=8)
00:25:39.945 starting I/O failed
00:25:39.945 [2024-04-26 15:02:39.708259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:39.945 [2024-04-26 15:02:39.708301] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:25:39.945 A controller has encountered a failure and is being reset.
00:25:39.945 Resorting to new failover address 192.168.100.9
00:25:39.945 [2024-04-26 15:02:39.710509] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:39.945 [2024-04-26 15:02:39.710550] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:39.945 [2024-04-26 15:02:39.710585] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3d40
00:25:40.881 [2024-04-26 15:02:40.714697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:40.881 qpair failed and we were unable to recover it.
00:25:40.881 [2024-04-26 15:02:40.716677] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:40.881 [2024-04-26 15:02:40.716722] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:40.881 [2024-04-26 15:02:40.716746] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3d40
00:25:41.823 [2024-04-26 15:02:41.720791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.823 qpair failed and we were unable to recover it.
00:25:41.823 [2024-04-26 15:02:41.722706] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:41.823 [2024-04-26 15:02:41.722742] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:41.823 [2024-04-26 15:02:41.722767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3d40
00:25:42.759 [2024-04-26 15:02:42.726761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.759 qpair failed and we were unable to recover it.
00:25:42.759 [2024-04-26 15:02:42.728499] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:42.759 [2024-04-26 15:02:42.728533] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:42.759 [2024-04-26 15:02:42.728569] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3d40
00:25:43.690 [2024-04-26 15:02:43.732636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:43.690 qpair failed and we were unable to recover it.
00:25:43.690 [2024-04-26 15:02:43.734482] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:43.690 [2024-04-26 15:02:43.734518] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:43.690 [2024-04-26 15:02:43.734559] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3d40
00:25:45.065 [2024-04-26 15:02:44.738601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:45.065 qpair failed and we were unable to recover it.
00:25:45.065 [2024-04-26 15:02:44.740678] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:45.065 [2024-04-26 15:02:44.740729] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:45.065 [2024-04-26 15:02:44.740753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3200
00:25:46.002 [2024-04-26 15:02:45.744961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:46.002 qpair failed and we were unable to recover it.
00:25:46.002 [2024-04-26 15:02:45.746856] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:46.002 [2024-04-26 15:02:45.746907] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:46.002 [2024-04-26 15:02:45.746930] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3200
00:25:46.937 [2024-04-26 15:02:46.750936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:46.937 qpair failed and we were unable to recover it.
00:25:46.937 [2024-04-26 15:02:46.752847] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:25:46.937 [2024-04-26 15:02:46.752892] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:25:46.937 [2024-04-26 15:02:46.752915] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4980
00:25:47.874 [2024-04-26 15:02:47.757093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:47.874 qpair failed and we were unable to recover it.
00:25:47.874 [2024-04-26 15:02:47.758985] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:47.874 [2024-04-26 15:02:47.759032] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:47.874 [2024-04-26 15:02:47.759055] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb80 00:25:48.805 [2024-04-26 15:02:48.763105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:48.805 qpair failed and we were unable to recover it. 00:25:48.805 [2024-04-26 15:02:48.764768] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:48.805 [2024-04-26 15:02:48.764822] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:48.805 [2024-04-26 15:02:48.764843] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb80 00:25:49.737 [2024-04-26 15:02:49.768874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:49.737 qpair failed and we were unable to recover it. 00:25:49.737 [2024-04-26 15:02:49.769196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.737 [2024-04-26 15:02:49.769514] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:25:49.737 [2024-04-26 15:02:49.809983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:49.994 Controller properly reset. 
00:25:49.994 Initializing NVMe Controllers 00:25:49.994 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.994 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.994 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:49.994 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:49.994 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:49.994 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:49.994 Initialization complete. Launching workers. 00:25:49.994 Starting thread on core 1 00:25:49.994 Starting thread on core 2 00:25:49.994 Starting thread on core 3 00:25:49.994 Starting thread on core 0 00:25:50.251 15:02:50 -- host/target_disconnect.sh@74 -- # sync 00:25:50.251 00:25:50.251 real 0m16.679s 00:25:50.251 user 0m57.510s 00:25:50.251 sys 0m3.654s 00:25:50.251 15:02:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:50.251 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:25:50.251 ************************************ 00:25:50.251 END TEST nvmf_target_disconnect_tc3 00:25:50.251 ************************************ 00:25:50.251 15:02:50 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:50.251 15:02:50 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:25:50.251 15:02:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:50.251 15:02:50 -- nvmf/common.sh@117 -- # sync 00:25:50.251 15:02:50 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:50.251 15:02:50 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:50.251 15:02:50 -- nvmf/common.sh@120 -- # set +e 00:25:50.251 15:02:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.251 15:02:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:50.251 rmmod nvme_rdma 00:25:50.251 rmmod nvme_fabrics 00:25:50.251 15:02:50 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.251 15:02:50 -- nvmf/common.sh@124 -- # set -e 00:25:50.251 15:02:50 -- nvmf/common.sh@125 -- # return 0 00:25:50.251 15:02:50 -- nvmf/common.sh@478 -- # '[' -n 316426 ']' 00:25:50.251 15:02:50 -- nvmf/common.sh@479 -- # killprocess 316426 00:25:50.251 15:02:50 -- common/autotest_common.sh@936 -- # '[' -z 316426 ']' 00:25:50.251 15:02:50 -- common/autotest_common.sh@940 -- # kill -0 316426 00:25:50.251 15:02:50 -- common/autotest_common.sh@941 -- # uname 00:25:50.251 15:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:50.251 15:02:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 316426 00:25:50.251 15:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:25:50.251 15:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:25:50.251 15:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 316426' 00:25:50.251 killing process with pid 316426 00:25:50.251 15:02:50 -- common/autotest_common.sh@955 -- # kill 316426 00:25:50.251 15:02:50 -- common/autotest_common.sh@960 -- # wait 316426 00:25:50.815 [2024-04-26 15:02:50.710672] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:52.190 15:02:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:52.190 15:02:52 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:25:52.190 00:25:52.190 real 0m34.293s 00:25:52.190 user 2m33.261s 00:25:52.190 sys 0m7.875s 00:25:52.190 15:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 ************************************ 00:25:52.190 END TEST nvmf_target_disconnect 00:25:52.190 ************************************ 00:25:52.190 15:02:52 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:25:52.190 15:02:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 
-- # set +x 00:25:52.190 15:02:52 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:25:52.190 00:25:52.190 real 17m9.699s 00:25:52.190 user 52m25.076s 00:25:52.190 sys 2m28.284s 00:25:52.190 15:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 ************************************ 00:25:52.190 END TEST nvmf_rdma 00:25:52.190 ************************************ 00:25:52.190 15:02:52 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:52.190 15:02:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:52.190 15:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 ************************************ 00:25:52.190 START TEST spdkcli_nvmf_rdma 00:25:52.190 ************************************ 00:25:52.190 15:02:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:25:52.190 * Looking for test storage... 
00:25:52.190 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:25:52.190 15:02:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:52.190 15:02:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.190 15:02:52 -- nvmf/common.sh@7 -- # uname -s 00:25:52.190 15:02:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.190 15:02:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.190 15:02:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.190 15:02:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.190 15:02:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.190 15:02:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.190 15:02:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.190 15:02:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.190 15:02:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.190 15:02:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.190 15:02:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b464f06-2980-e311-ba20-001e67a94acd 00:25:52.190 15:02:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b464f06-2980-e311-ba20-001e67a94acd 00:25:52.190 15:02:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.190 15:02:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.190 15:02:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.190 15:02:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.190 15:02:52 -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:52.190 15:02:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.190 15:02:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.190 15:02:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.190 15:02:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.190 15:02:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.190 15:02:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.190 15:02:52 -- paths/export.sh@5 -- # export PATH 00:25:52.190 15:02:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.190 15:02:52 -- nvmf/common.sh@47 -- # : 0 00:25:52.190 15:02:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.190 15:02:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.190 15:02:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.190 15:02:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.190 15:02:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.190 15:02:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.190 15:02:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.190 15:02:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:52.190 15:02:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:25:52.190 15:02:52 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:52.190 15:02:52 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=318260 00:25:52.190 15:02:52 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:52.190 15:02:52 -- spdkcli/common.sh@34 -- # waitforlisten 318260 00:25:52.190 15:02:52 -- common/autotest_common.sh@817 -- # '[' -z 318260 ']' 00:25:52.190 15:02:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.190 15:02:52 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:25:52.190 15:02:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.190 15:02:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:52.190 15:02:52 -- common/autotest_common.sh@10 -- # set +x 00:25:52.449 [2024-04-26 15:02:52.337599] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:52.449 [2024-04-26 15:02:52.337743] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318260 ] 00:25:52.449 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.449 [2024-04-26 15:02:52.460244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:52.708 [2024-04-26 15:02:52.707187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.709 [2024-04-26 15:02:52.707190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.274 15:02:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:53.274 15:02:53 -- common/autotest_common.sh@850 -- # return 0 00:25:53.274 15:02:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:53.274 15:02:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:53.274 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:25:53.274 15:02:53 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:53.274 15:02:53 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:25:53.274 15:02:53 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:25:53.274 15:02:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:25:53.274 15:02:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.274 15:02:53 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:25:53.274 15:02:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:53.274 15:02:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:53.274 15:02:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.274 15:02:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:53.274 15:02:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.274 15:02:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:53.274 15:02:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:53.274 15:02:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.274 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:25:55.174 15:02:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:55.174 15:02:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.174 15:02:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.174 15:02:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.174 15:02:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.174 15:02:55 -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.174 15:02:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@296 -- # e810=() 00:25:55.174 15:02:55 -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.174 15:02:55 -- nvmf/common.sh@297 -- # x722=() 00:25:55.174 15:02:55 -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.174 15:02:55 -- nvmf/common.sh@298 -- # mlx=() 00:25:55.174 15:02:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.174 15:02:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.174 15:02:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:25:55.174 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:25:55.174 15:02:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:55.174 15:02:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:25:55.174 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:25:55.174 15:02:55 -- nvmf/common.sh@342 -- # [[ 
mlx5_core == unknown ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:55.174 15:02:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.174 15:02:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.174 15:02:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:25:55.174 Found net devices under 0000:09:00.0: mlx_0_0 00:25:55.174 15:02:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.174 15:02:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.174 15:02:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:25:55.174 Found net devices under 0000:09:00.1: mlx_0_1 00:25:55.174 15:02:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.174 15:02:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:55.174 15:02:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@409 -- # rdma_device_init 00:25:55.174 15:02:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:25:55.174 
15:02:55 -- nvmf/common.sh@58 -- # uname 00:25:55.174 15:02:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:55.174 15:02:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:55.174 15:02:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:55.174 15:02:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:55.174 15:02:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:55.174 15:02:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:55.174 15:02:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:55.174 15:02:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:55.174 15:02:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:25:55.174 15:02:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:55.174 15:02:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:55.174 15:02:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:55.174 15:02:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:55.174 15:02:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:55.174 15:02:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:55.174 15:02:55 -- nvmf/common.sh@105 -- # continue 2 00:25:55.174 15:02:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.174 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:55.174 
15:02:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:55.174 15:02:55 -- nvmf/common.sh@105 -- # continue 2 00:25:55.174 15:02:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:55.174 15:02:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:55.174 15:02:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.174 15:02:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:55.174 15:02:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:55.174 14: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:55.174 link/ether b8:59:9f:af:fe:00 brd ff:ff:ff:ff:ff:ff 00:25:55.174 altname enp9s0f0np0 00:25:55.174 inet 192.168.100.8/24 scope global mlx_0_0 00:25:55.174 valid_lft forever preferred_lft forever 00:25:55.174 15:02:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:55.174 15:02:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:55.174 15:02:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.174 15:02:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.174 15:02:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:55.174 15:02:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:55.174 15: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:55.174 link/ether b8:59:9f:af:fe:01 brd ff:ff:ff:ff:ff:ff 00:25:55.174 altname enp9s0f1np1 00:25:55.174 inet 192.168.100.9/24 scope global mlx_0_1 00:25:55.174 valid_lft forever preferred_lft forever 00:25:55.174 15:02:55 -- nvmf/common.sh@411 -- # return 0 
00:25:55.174 15:02:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:55.174 15:02:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:55.174 15:02:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:25:55.174 15:02:55 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:25:55.174 15:02:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:55.174 15:02:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:55.174 15:02:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:55.174 15:02:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:55.174 15:02:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:55.175 15:02:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.175 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.175 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:55.175 15:02:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:55.175 15:02:55 -- nvmf/common.sh@105 -- # continue 2 00:25:55.175 15:02:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:55.175 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.175 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:55.175 15:02:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:55.175 15:02:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:55.175 15:02:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:55.175 15:02:55 -- nvmf/common.sh@105 -- # continue 2 00:25:55.175 15:02:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:55.175 15:02:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:55.175 15:02:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:55.175 15:02:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:55.175 
15:02:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.175 15:02:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.175 15:02:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:55.175 15:02:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:55.175 15:02:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:55.175 15:02:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:55.175 15:02:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:55.175 15:02:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:55.175 15:02:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:25:55.175 192.168.100.9' 00:25:55.175 15:02:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:55.175 192.168.100.9' 00:25:55.175 15:02:55 -- nvmf/common.sh@446 -- # head -n 1 00:25:55.175 15:02:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:55.175 15:02:55 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:25:55.175 192.168.100.9' 00:25:55.175 15:02:55 -- nvmf/common.sh@447 -- # tail -n +2 00:25:55.175 15:02:55 -- nvmf/common.sh@447 -- # head -n 1 00:25:55.175 15:02:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:55.175 15:02:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:25:55.175 15:02:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:55.175 15:02:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:25:55.175 15:02:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:25:55.175 15:02:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:25:55.175 15:02:55 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:25:55.175 15:02:55 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:55.175 15:02:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:55.175 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:25:55.175 15:02:55 -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:55.175 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:55.175 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:55.175 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:55.175 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:55.175 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:55.175 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:55.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:55.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:55.175 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' 
'\''192.168.100.8:4260'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:55.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:55.175 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:55.175 ' 00:25:55.743 [2024-04-26 15:02:55.639063] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:58.281 [2024-04-26 15:02:57.997375] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a940/0x7fe25978c940) succeed. 00:25:58.281 [2024-04-26 15:02:58.011544] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002aac0/0x7fe259748940) succeed. 
00:25:59.661 [2024-04-26 15:02:59.405096] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:26:02.204 [2024-04-26 15:03:01.692934] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:26:04.100 [2024-04-26 15:03:03.667741] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:05.474 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:05.474 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:05.474 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:05.474 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:05.474 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:05.474 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:05.474 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:05.474 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:05.474 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:05.474 15:03:05 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:05.474 15:03:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:05.474 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.474 15:03:05 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:05.474 15:03:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:05.474 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.474 15:03:05 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:05.474 15:03:05 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:05.732 15:03:05 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:05.732 15:03:05 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:05.732 15:03:05 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:05.732 15:03:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:05.732 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 15:03:05 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:05.990 15:03:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:05.990 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.990 15:03:05 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:05.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:05.990 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:05.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:05.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:26:05.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:26:05.990 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:05.990 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:05.990 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:05.990 ' 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:26:12.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:26:12.555 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:12.555 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:12.555 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:12.555 15:03:11 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:12.555 15:03:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:12.555 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:26:12.555 15:03:11 -- spdkcli/nvmf.sh@90 -- # killprocess 318260 00:26:12.555 15:03:11 -- common/autotest_common.sh@936 -- # '[' -z 318260 ']' 00:26:12.555 15:03:11 -- common/autotest_common.sh@940 -- # kill -0 318260 00:26:12.555 15:03:11 -- common/autotest_common.sh@941 -- # uname 00:26:12.555 15:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:12.555 15:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 318260 00:26:12.555 15:03:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:12.555 15:03:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:12.555 15:03:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 318260' 00:26:12.555 killing process with pid 318260 00:26:12.555 15:03:11 -- common/autotest_common.sh@955 -- # kill 318260 00:26:12.555 [2024-04-26 15:03:11.528408] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:12.555 15:03:11 -- 
common/autotest_common.sh@960 -- # wait 318260 00:26:12.556 [2024-04-26 15:03:11.846138] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:26:13.123 15:03:13 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:26:13.123 15:03:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:13.123 15:03:13 -- nvmf/common.sh@117 -- # sync 00:26:13.123 15:03:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:13.123 15:03:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:13.123 15:03:13 -- nvmf/common.sh@120 -- # set +e 00:26:13.381 15:03:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.381 15:03:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:13.381 rmmod nvme_rdma 00:26:13.381 rmmod nvme_fabrics 00:26:13.382 15:03:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.382 15:03:13 -- nvmf/common.sh@124 -- # set -e 00:26:13.382 15:03:13 -- nvmf/common.sh@125 -- # return 0 00:26:13.382 15:03:13 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:26:13.382 15:03:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:13.382 15:03:13 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:26:13.382 00:26:13.382 real 0m21.056s 00:26:13.382 user 0m43.716s 00:26:13.382 sys 0m2.494s 00:26:13.382 15:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:13.382 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:26:13.382 ************************************ 00:26:13.382 END TEST spdkcli_nvmf_rdma 00:26:13.382 ************************************ 00:26:13.382 15:03:13 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 
00:26:13.382 15:03:13 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:26:13.382 15:03:13 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:26:13.382 15:03:13 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:26:13.382 15:03:13 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:26:13.382 15:03:13 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:26:13.382 15:03:13 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:26:13.382 15:03:13 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:26:13.382 15:03:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:13.382 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:26:13.382 15:03:13 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:26:13.382 15:03:13 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:26:13.382 15:03:13 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:26:13.382 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:26:14.759 INFO: APP EXITING 00:26:14.759 INFO: killing all VMs 00:26:14.759 INFO: killing vhost app 00:26:14.759 INFO: EXIT DONE 00:26:16.138 Waiting for block devices as requested 00:26:16.138 0000:81:00.0 (8086 0a54): vfio-pci -> nvme 00:26:16.138 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:16.138 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:16.138 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:16.138 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:16.138 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:16.397 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:16.397 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:16.397 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:16.397 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:16.658 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:16.658 0000:80:04.5 (8086 0e25): vfio-pci -> 
ioatdma 00:26:16.658 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:16.658 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:16.920 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:16.920 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:16.920 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:18.296 Cleaning 00:26:18.296 Removing: /var/run/dpdk/spdk0/config 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:18.296 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:18.296 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:18.296 Removing: /var/run/dpdk/spdk1/config 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:18.296 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:18.296 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:18.296 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:18.296 Removing: /var/run/dpdk/spdk2/config 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:18.296 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:18.296 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:18.296 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:18.296 Removing: /var/run/dpdk/spdk3/config 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:18.296 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:18.296 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:18.296 Removing: /var/run/dpdk/spdk4/config 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:18.296 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:18.296 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:18.296 Removing: /dev/shm/bdevperf_trace.pid193146 
00:26:18.296 Removing: /dev/shm/bdev_svc_trace.1 00:26:18.296 Removing: /dev/shm/nvmf_trace.0 00:26:18.296 Removing: /dev/shm/spdk_tgt_trace.pid99521 00:26:18.296 Removing: /var/run/dpdk/spdk0 00:26:18.296 Removing: /var/run/dpdk/spdk1 00:26:18.296 Removing: /var/run/dpdk/spdk2 00:26:18.296 Removing: /var/run/dpdk/spdk3 00:26:18.296 Removing: /var/run/dpdk/spdk4 00:26:18.296 Removing: /var/run/dpdk/spdk_pid100367 00:26:18.296 Removing: /var/run/dpdk/spdk_pid101739 00:26:18.296 Removing: /var/run/dpdk/spdk_pid102211 00:26:18.296 Removing: /var/run/dpdk/spdk_pid103128 00:26:18.296 Removing: /var/run/dpdk/spdk_pid103272 00:26:18.296 Removing: /var/run/dpdk/spdk_pid103798 00:26:18.296 Removing: /var/run/dpdk/spdk_pid106861 00:26:18.296 Removing: /var/run/dpdk/spdk_pid108164 00:26:18.296 Removing: /var/run/dpdk/spdk_pid108645 00:26:18.296 Removing: /var/run/dpdk/spdk_pid109109 00:26:18.296 Removing: /var/run/dpdk/spdk_pid109710 00:26:18.296 Removing: /var/run/dpdk/spdk_pid110182 00:26:18.296 Removing: /var/run/dpdk/spdk_pid110380 00:26:18.296 Removing: /var/run/dpdk/spdk_pid110663 00:26:18.296 Removing: /var/run/dpdk/spdk_pid110991 00:26:18.296 Removing: /var/run/dpdk/spdk_pid111452 00:26:18.296 Removing: /var/run/dpdk/spdk_pid114094 00:26:18.296 Removing: /var/run/dpdk/spdk_pid114653 00:26:18.296 Removing: /var/run/dpdk/spdk_pid115217 00:26:18.296 Removing: /var/run/dpdk/spdk_pid115360 00:26:18.296 Removing: /var/run/dpdk/spdk_pid116604 00:26:18.296 Removing: /var/run/dpdk/spdk_pid116790 00:26:18.296 Removing: /var/run/dpdk/spdk_pid118103 00:26:18.296 Removing: /var/run/dpdk/spdk_pid118242 00:26:18.296 Removing: /var/run/dpdk/spdk_pid118803 00:26:18.296 Removing: /var/run/dpdk/spdk_pid118944 00:26:18.296 Removing: /var/run/dpdk/spdk_pid119305 00:26:18.555 Removing: /var/run/dpdk/spdk_pid119515 00:26:18.555 Removing: /var/run/dpdk/spdk_pid120561 00:26:18.555 Removing: /var/run/dpdk/spdk_pid120825 00:26:18.555 Removing: /var/run/dpdk/spdk_pid121064 00:26:18.555 Removing: 
/var/run/dpdk/spdk_pid121644 00:26:18.555 Removing: /var/run/dpdk/spdk_pid121925 00:26:18.555 Removing: /var/run/dpdk/spdk_pid122156 00:26:18.555 Removing: /var/run/dpdk/spdk_pid122562 00:26:18.555 Removing: /var/run/dpdk/spdk_pid122869 00:26:18.555 Removing: /var/run/dpdk/spdk_pid123282 00:26:18.555 Removing: /var/run/dpdk/spdk_pid123582 00:26:18.555 Removing: /var/run/dpdk/spdk_pid123920 00:26:18.555 Removing: /var/run/dpdk/spdk_pid124292 00:26:18.555 Removing: /var/run/dpdk/spdk_pid124596 00:26:18.555 Removing: /var/run/dpdk/spdk_pid125002 00:26:18.555 Removing: /var/run/dpdk/spdk_pid125299 00:26:18.555 Removing: /var/run/dpdk/spdk_pid125777 00:26:18.555 Removing: /var/run/dpdk/spdk_pid126127 00:26:18.555 Removing: /var/run/dpdk/spdk_pid126741 00:26:18.555 Removing: /var/run/dpdk/spdk_pid127348 00:26:18.555 Removing: /var/run/dpdk/spdk_pid127645 00:26:18.555 Removing: /var/run/dpdk/spdk_pid128063 00:26:18.555 Removing: /var/run/dpdk/spdk_pid128360 00:26:18.555 Removing: /var/run/dpdk/spdk_pid128773 00:26:18.555 Removing: /var/run/dpdk/spdk_pid129083 00:26:18.555 Removing: /var/run/dpdk/spdk_pid129447 00:26:18.555 Removing: /var/run/dpdk/spdk_pid129793 00:26:18.555 Removing: /var/run/dpdk/spdk_pid130134 00:26:18.555 Removing: /var/run/dpdk/spdk_pid130753 00:26:18.555 Removing: /var/run/dpdk/spdk_pid133370 00:26:18.555 Removing: /var/run/dpdk/spdk_pid168119 00:26:18.555 Removing: /var/run/dpdk/spdk_pid170745 00:26:18.555 Removing: /var/run/dpdk/spdk_pid176859 00:26:18.555 Removing: /var/run/dpdk/spdk_pid180121 00:26:18.555 Removing: /var/run/dpdk/spdk_pid182264 00:26:18.555 Removing: /var/run/dpdk/spdk_pid182823 00:26:18.555 Removing: /var/run/dpdk/spdk_pid193146 00:26:18.555 Removing: /var/run/dpdk/spdk_pid193426 00:26:18.555 Removing: /var/run/dpdk/spdk_pid196729 00:26:18.555 Removing: /var/run/dpdk/spdk_pid200732 00:26:18.555 Removing: /var/run/dpdk/spdk_pid202919 00:26:18.555 Removing: /var/run/dpdk/spdk_pid209708 00:26:18.555 Removing: 
/var/run/dpdk/spdk_pid226403 00:26:18.555 Removing: /var/run/dpdk/spdk_pid228812 00:26:18.555 Removing: /var/run/dpdk/spdk_pid240906 00:26:18.555 Removing: /var/run/dpdk/spdk_pid263497 00:26:18.555 Removing: /var/run/dpdk/spdk_pid264851 00:26:18.555 Removing: /var/run/dpdk/spdk_pid266356 00:26:18.555 Removing: /var/run/dpdk/spdk_pid269235 00:26:18.555 Removing: /var/run/dpdk/spdk_pid273419 00:26:18.555 Removing: /var/run/dpdk/spdk_pid274210 00:26:18.555 Removing: /var/run/dpdk/spdk_pid275120 00:26:18.555 Removing: /var/run/dpdk/spdk_pid275911 00:26:18.555 Removing: /var/run/dpdk/spdk_pid276191 00:26:18.555 Removing: /var/run/dpdk/spdk_pid279182 00:26:18.555 Removing: /var/run/dpdk/spdk_pid279191 00:26:18.555 Removing: /var/run/dpdk/spdk_pid282099 00:26:18.555 Removing: /var/run/dpdk/spdk_pid282493 00:26:18.555 Removing: /var/run/dpdk/spdk_pid282893 00:26:18.555 Removing: /var/run/dpdk/spdk_pid283430 00:26:18.555 Removing: /var/run/dpdk/spdk_pid283552 00:26:18.555 Removing: /var/run/dpdk/spdk_pid286442 00:26:18.555 Removing: /var/run/dpdk/spdk_pid286943 00:26:18.555 Removing: /var/run/dpdk/spdk_pid290186 00:26:18.555 Removing: /var/run/dpdk/spdk_pid292302 00:26:18.555 Removing: /var/run/dpdk/spdk_pid296792 00:26:18.555 Removing: /var/run/dpdk/spdk_pid296824 00:26:18.555 Removing: /var/run/dpdk/spdk_pid310012 00:26:18.555 Removing: /var/run/dpdk/spdk_pid310284 00:26:18.555 Removing: /var/run/dpdk/spdk_pid314222 00:26:18.555 Removing: /var/run/dpdk/spdk_pid314657 00:26:18.555 Removing: /var/run/dpdk/spdk_pid315902 00:26:18.555 Removing: /var/run/dpdk/spdk_pid318260 00:26:18.555 Removing: /var/run/dpdk/spdk_pid96599 00:26:18.555 Removing: /var/run/dpdk/spdk_pid97761 00:26:18.555 Removing: /var/run/dpdk/spdk_pid99521 00:26:18.555 Clean 00:26:18.814 15:03:18 -- common/autotest_common.sh@1437 -- # return 0 00:26:18.814 15:03:18 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:26:18.814 15:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.814 
15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:26:18.814 15:03:18 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:26:18.814 15:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.814 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:26:18.814 15:03:18 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:26:18.814 15:03:18 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:26:18.814 15:03:18 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:26:18.814 15:03:18 -- spdk/autotest.sh@389 -- # hash lcov 00:26:18.814 15:03:18 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:18.814 15:03:18 -- spdk/autotest.sh@391 -- # hostname 00:26:18.814 15:03:18 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-12 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:26:19.072 geninfo: WARNING: invalid characters removed from testname! 
00:26:45.633 15:03:44 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:48.929 15:03:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:51.469 15:03:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:54.008 15:03:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:56.547 15:03:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:26:59.840 15:03:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:02.381 15:04:01 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:02.381 15:04:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:02.381 15:04:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:02.381 15:04:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.381 15:04:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.381 15:04:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.381 15:04:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.381 15:04:01 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:02.381 15:04:01 -- paths/export.sh@5 -- $ export PATH
00:27:02.381 15:04:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:02.381 15:04:01 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:27:02.381 15:04:01 -- common/autobuild_common.sh@435 -- $ date +%s
00:27:02.381 15:04:01 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714136641.XXXXXX
00:27:02.381 15:04:01 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714136641.yQd0EU
00:27:02.381 15:04:01 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:27:02.381 15:04:01 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:27:02.381 15:04:01 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:27:02.381 15:04:01 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:27:02.381 15:04:01 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:27:02.381 15:04:01 -- common/autobuild_common.sh@451 -- $ get_config_params
00:27:02.381 15:04:01 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:27:02.381 15:04:01 -- common/autotest_common.sh@10 -- $ set +x
00:27:02.381 15:04:01 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:27:02.381 15:04:01 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:27:02.381 15:04:01 -- pm/common@17 -- $ local monitor
00:27:02.381 15:04:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:02.381 15:04:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=331187
00:27:02.381 15:04:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:02.381 15:04:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=331189
00:27:02.381 15:04:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:02.381 15:04:01 -- pm/common@21 -- $ date +%s
00:27:02.381 15:04:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=331191
00:27:02.381 15:04:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:02.381 15:04:01 -- pm/common@21 -- $ date +%s
00:27:02.381 15:04:01 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=331193
00:27:02.381 15:04:01 -- pm/common@26 -- $ sleep 1
00:27:02.381 15:04:01 -- pm/common@21 -- $ date +%s
00:27:02.381 15:04:01 -- pm/common@21 -- $ date +%s
00:27:02.381 15:04:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136641
00:27:02.381 15:04:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136641
00:27:02.381 15:04:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136641
00:27:02.381 15:04:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714136641
00:27:02.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136641_collect-bmc-pm.bmc.pm.log
00:27:02.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136641_collect-vmstat.pm.log
00:27:02.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136641_collect-cpu-load.pm.log
00:27:02.381 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714136641_collect-cpu-temp.pm.log
00:27:02.949 15:04:02 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:27:02.949 15:04:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:27:02.949 15:04:02 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:27:02.949 15:04:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:02.949 15:04:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:02.949 15:04:02 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:02.950 15:04:02 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:02.950 15:04:02 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:02.950 15:04:02 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:27:02.950 15:04:03 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:02.950 15:04:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:27:02.950 15:04:03 -- pm/common@30 -- $ signal_monitor_resources TERM
00:27:02.950 15:04:03 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:27:02.950 15:04:03 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:02.950 15:04:03 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:27:02.950 15:04:03 -- pm/common@45 -- $ pid=331213
00:27:02.950 15:04:03 -- pm/common@52 -- $ sudo kill -TERM 331213
00:27:03.208 15:04:03 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:03.208 15:04:03 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:27:03.208 15:04:03 -- pm/common@45 -- $ pid=331214
00:27:03.208 15:04:03 -- pm/common@52 -- $ sudo kill -TERM 331214
00:27:03.208 15:04:03 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:03.208 15:04:03 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:27:03.208 15:04:03 -- pm/common@45 -- $ pid=331212
00:27:03.208 15:04:03 -- pm/common@52 -- $ sudo kill -TERM 331212
00:27:03.208 15:04:03 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:03.208 15:04:03 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:27:03.209 15:04:03 -- pm/common@45 -- $ pid=331205
00:27:03.209 15:04:03 -- pm/common@52 -- $ sudo kill -TERM 331205
00:27:03.209 + [[ -n 10026 ]]
00:27:03.209 + sudo kill 10026
00:27:03.219 [Pipeline] }
00:27:03.232 [Pipeline] // stage
00:27:03.236 [Pipeline] }
00:27:03.253 [Pipeline] // timeout
00:27:03.258 [Pipeline] }
00:27:03.273 [Pipeline] // catchError
00:27:03.278 [Pipeline] }
00:27:03.293 [Pipeline] // wrap
00:27:03.336 [Pipeline] }
00:27:03.360 [Pipeline] // catchError
00:27:03.365 [Pipeline] stage
00:27:03.366 [Pipeline] { (Epilogue)
00:27:03.374 [Pipeline] catchError
00:27:03.375 [Pipeline] {
00:27:03.385 [Pipeline] echo
00:27:03.386 Cleanup processes
00:27:03.391 [Pipeline] sh
00:27:03.680 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:27:03.680 331324 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:27:03.680 331476 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:27:03.693 [Pipeline] sh
00:27:03.977 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:27:03.977 ++ grep -v 'sudo pgrep'
00:27:03.977 ++ awk '{print $1}'
00:27:03.977 + sudo kill -9 331324
00:27:03.988 [Pipeline] sh
00:27:04.275 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:14.355 [Pipeline] sh
00:27:14.645 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:14.645 Artifacts sizes are good
00:27:14.663 [Pipeline] archiveArtifacts
00:27:14.672 Archiving artifacts
00:27:15.294 [Pipeline] sh
00:27:15.575 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:27:15.591 [Pipeline] cleanWs
00:27:15.602 [WS-CLEANUP] Deleting project workspace...
00:27:15.602 [WS-CLEANUP] Deferred wipeout is used...
00:27:15.609 [WS-CLEANUP] done
00:27:15.611 [Pipeline] }
00:27:15.631 [Pipeline] // catchError
00:27:15.643 [Pipeline] sh
00:27:15.929 + logger -p user.info -t JENKINS-CI
00:27:15.938 [Pipeline] }
00:27:15.954 [Pipeline] // stage
00:27:15.959 [Pipeline] }
00:27:15.976 [Pipeline] // node
00:27:15.981 [Pipeline] End of Pipeline
00:27:16.018 Finished: SUCCESS